=== RUN TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run: out/minikube-linux-amd64 -p multinode-858631 node start m03 --alsologtostderr
E0224 01:03:40.892558 11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
multinode_test.go:252: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-858631 node start m03 --alsologtostderr: exit status 90 (18.189438331s)
-- stdout --
* Starting worker node multinode-858631-m03 in cluster multinode-858631
* Restarting existing kvm2 VM for "multinode-858631-m03" ...
-- /stdout --
** stderr **
I0224 01:03:32.260717 24114 out.go:296] Setting OutFile to fd 1 ...
I0224 01:03:32.260942 24114 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0224 01:03:32.260964 24114 out.go:309] Setting ErrFile to fd 2...
I0224 01:03:32.260971 24114 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0224 01:03:32.261443 24114 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
I0224 01:03:32.261806 24114 mustload.go:65] Loading cluster: multinode-858631
I0224 01:03:32.262145 24114 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0224 01:03:32.262456 24114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0224 01:03:32.262493 24114 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 01:03:32.277174 24114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
I0224 01:03:32.277645 24114 main.go:141] libmachine: () Calling .GetVersion
I0224 01:03:32.278156 24114 main.go:141] libmachine: Using API Version 1
I0224 01:03:32.278177 24114 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 01:03:32.278511 24114 main.go:141] libmachine: () Calling .GetMachineName
I0224 01:03:32.278690 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetState
W0224 01:03:32.280169 24114 host.go:58] "multinode-858631-m03" host status: Stopped
I0224 01:03:32.282511 24114 out.go:177] * Starting worker node multinode-858631-m03 in cluster multinode-858631
I0224 01:03:32.283904 24114 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0224 01:03:32.283939 24114 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0224 01:03:32.283950 24114 cache.go:57] Caching tarball of preloaded images
I0224 01:03:32.284043 24114 preload.go:174] Found /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0224 01:03:32.284058 24114 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0224 01:03:32.284190 24114 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
I0224 01:03:32.284400 24114 cache.go:193] Successfully downloaded all kic artifacts
I0224 01:03:32.284445 24114 start.go:364] acquiring machines lock for multinode-858631-m03: {Name:mk99c679472abf655c2223ea7db4ce727d2ab6ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0224 01:03:32.284513 24114 start.go:368] acquired machines lock for "multinode-858631-m03" in 36.132µs
I0224 01:03:32.284537 24114 start.go:96] Skipping create...Using existing machine configuration
I0224 01:03:32.284549 24114 fix.go:55] fixHost starting: m03
I0224 01:03:32.284825 24114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0224 01:03:32.284858 24114 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 01:03:32.298672 24114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
I0224 01:03:32.299062 24114 main.go:141] libmachine: () Calling .GetVersion
I0224 01:03:32.299472 24114 main.go:141] libmachine: Using API Version 1
I0224 01:03:32.299496 24114 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 01:03:32.299859 24114 main.go:141] libmachine: () Calling .GetMachineName
I0224 01:03:32.300041 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:32.300197 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetState
I0224 01:03:32.301607 24114 fix.go:103] recreateIfNeeded on multinode-858631-m03: state=Stopped err=<nil>
I0224 01:03:32.301631 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
W0224 01:03:32.301795 24114 fix.go:129] unexpected machine state, will restart: <nil>
I0224 01:03:32.303890 24114 out.go:177] * Restarting existing kvm2 VM for "multinode-858631-m03" ...
I0224 01:03:32.305217 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .Start
I0224 01:03:32.305402 24114 main.go:141] libmachine: (multinode-858631-m03) Ensuring networks are active...
I0224 01:03:32.306146 24114 main.go:141] libmachine: (multinode-858631-m03) Ensuring network default is active
I0224 01:03:32.306514 24114 main.go:141] libmachine: (multinode-858631-m03) Ensuring network mk-multinode-858631 is active
I0224 01:03:32.306956 24114 main.go:141] libmachine: (multinode-858631-m03) Getting domain xml...
I0224 01:03:32.307642 24114 main.go:141] libmachine: (multinode-858631-m03) Creating domain...
I0224 01:03:33.535122 24114 main.go:141] libmachine: (multinode-858631-m03) Waiting to get IP...
I0224 01:03:33.536096 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:33.536497 24114 main.go:141] libmachine: (multinode-858631-m03) Found IP for machine: 192.168.39.240
I0224 01:03:33.536523 24114 main.go:141] libmachine: (multinode-858631-m03) Reserving static IP address...
I0224 01:03:33.536541 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has current primary IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:33.537088 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "multinode-858631-m03", mac: "52:54:00:71:f9:c5", ip: "192.168.39.240"} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:02:40 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:33.537115 24114 main.go:141] libmachine: (multinode-858631-m03) Reserved static IP address: 192.168.39.240
I0224 01:03:33.537132 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | skip adding static IP to network mk-multinode-858631 - found existing host DHCP lease matching {name: "multinode-858631-m03", mac: "52:54:00:71:f9:c5", ip: "192.168.39.240"}
I0224 01:03:33.537149 24114 main.go:141] libmachine: (multinode-858631-m03) Waiting for SSH to be available...
I0224 01:03:33.537176 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | Getting to WaitForSSH function...
I0224 01:03:33.539150 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:33.539490 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:02:40 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:33.539546 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:33.539598 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | Using SSH client type: external
I0224 01:03:33.539625 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa (-rw-------)
I0224 01:03:33.539663 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
I0224 01:03:33.539685 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | About to run SSH command:
I0224 01:03:33.539701 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | exit 0
I0224 01:03:45.637148 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | SSH cmd err, output: <nil>:
I0224 01:03:45.637538 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetConfigRaw
I0224 01:03:45.638256 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetIP
I0224 01:03:45.640589 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.640953 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.640999 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.641231 24114 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
I0224 01:03:45.641390 24114 machine.go:88] provisioning docker machine ...
I0224 01:03:45.641406 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:45.641606 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetMachineName
I0224 01:03:45.641772 24114 buildroot.go:166] provisioning hostname "multinode-858631-m03"
I0224 01:03:45.641789 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetMachineName
I0224 01:03:45.641914 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:45.644042 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.644382 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.644408 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.644536 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:45.644688 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:45.644813 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:45.644897 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:45.645054 24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:45.645540 24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:45.645558 24114 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-858631-m03 && echo "multinode-858631-m03" | sudo tee /etc/hostname
I0224 01:03:45.781731 24114 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-858631-m03
I0224 01:03:45.781763 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:45.784434 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.784871 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.784902 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.785048 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:45.785217 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:45.785340 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:45.785461 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:45.785613 24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:45.786019 24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:45.786037 24114 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-858631-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-858631-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-858631-m03' | sudo tee -a /etc/hosts;
fi
fi
I0224 01:03:45.909630 24114 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0224 01:03:45.909671 24114 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-4074/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-4074/.minikube}
I0224 01:03:45.909708 24114 buildroot.go:174] setting up certificates
I0224 01:03:45.909717 24114 provision.go:83] configureAuth start
I0224 01:03:45.909726 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetMachineName
I0224 01:03:45.909926 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetIP
I0224 01:03:45.912640 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.912983 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.913005 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.913176 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:45.915338 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.915697 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.915738 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.915804 24114 provision.go:138] copyHostCerts
I0224 01:03:45.915872 24114 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem, removing ...
I0224 01:03:45.915893 24114 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
I0224 01:03:45.915970 24114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem (1078 bytes)
I0224 01:03:45.916080 24114 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem, removing ...
I0224 01:03:45.916091 24114 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
I0224 01:03:45.916128 24114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem (1123 bytes)
I0224 01:03:45.916214 24114 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem, removing ...
I0224 01:03:45.916224 24114 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
I0224 01:03:45.916256 24114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem (1679 bytes)
I0224 01:03:45.916320 24114 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem org=jenkins.multinode-858631-m03 san=[192.168.39.240 192.168.39.240 localhost 127.0.0.1 minikube multinode-858631-m03]
I0224 01:03:46.139019 24114 provision.go:172] copyRemoteCerts
I0224 01:03:46.139085 24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0224 01:03:46.139111 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:46.141414 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.141764 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:46.141802 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.142019 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:46.142249 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.142398 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:46.142564 24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
I0224 01:03:46.230066 24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0224 01:03:46.252894 24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0224 01:03:46.275597 24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0224 01:03:46.297736 24114 provision.go:86] duration metric: configureAuth took 388.009002ms
I0224 01:03:46.297760 24114 buildroot.go:189] setting minikube options for container-runtime
I0224 01:03:46.297955 24114 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0224 01:03:46.297975 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:46.298205 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:46.300343 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.300707 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:46.300726 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.300874 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:46.300999 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.301111 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.301213 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:46.301379 24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:46.301823 24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:46.301836 24114 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0224 01:03:46.419110 24114 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0224 01:03:46.419136 24114 buildroot.go:70] root file system type: tmpfs
I0224 01:03:46.419278 24114 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0224 01:03:46.419300 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:46.421650 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.422035 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:46.422074 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.422208 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:46.422385 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.422503 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.422600 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:46.422780 24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:46.423174 24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:46.423237 24114 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0224 01:03:46.549929 24114 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0224 01:03:46.549963 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:46.552418 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.552729 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:46.552756 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.552886 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:46.553084 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.553255 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.553414 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:46.553596 24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:46.554026 24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:46.554049 24114 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0224 01:03:47.346622 24114 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0224 01:03:47.346647 24114 machine.go:91] provisioned docker machine in 1.705245446s
I0224 01:03:47.346658 24114 start.go:300] post-start starting for "multinode-858631-m03" (driver="kvm2")
I0224 01:03:47.346666 24114 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0224 01:03:47.346689 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.346962 24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0224 01:03:47.346988 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:47.349581 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.349992 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.350017 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.350172 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:47.350362 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.350549 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:47.350700 24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
I0224 01:03:47.439110 24114 ssh_runner.go:195] Run: cat /etc/os-release
I0224 01:03:47.443190 24114 info.go:137] Remote host: Buildroot 2021.02.12
I0224 01:03:47.443208 24114 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/addons for local assets ...
I0224 01:03:47.443272 24114 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/files for local assets ...
I0224 01:03:47.443345 24114 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> 111312.pem in /etc/ssl/certs
I0224 01:03:47.443425 24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0224 01:03:47.451605 24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /etc/ssl/certs/111312.pem (1708 bytes)
I0224 01:03:47.477354 24114 start.go:303] post-start completed in 130.684659ms
I0224 01:03:47.477374 24114 fix.go:57] fixHost completed within 15.192824116s
I0224 01:03:47.477397 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:47.480041 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.480408 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.480432 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.480585 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:47.480775 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.480910 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.481050 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:47.481200 24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:47.481636 24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:47.481650 24114 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0224 01:03:47.598085 24114 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677200627.547433994
I0224 01:03:47.598107 24114 fix.go:207] guest clock: 1677200627.547433994
I0224 01:03:47.598117 24114 fix.go:220] Guest: 2023-02-24 01:03:47.547433994 +0000 UTC Remote: 2023-02-24 01:03:47.477378328 +0000 UTC m=+15.254967977 (delta=70.055666ms)
I0224 01:03:47.598162 24114 fix.go:191] guest clock delta is within tolerance: 70.055666ms
I0224 01:03:47.598169 24114 start.go:83] releasing machines lock for "multinode-858631-m03", held for 15.313644124s
I0224 01:03:47.598196 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.598466 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetIP
I0224 01:03:47.601108 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.601449 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.601500 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.601635 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.602113 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.602297 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.602410 24114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0224 01:03:47.602454 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:47.602579 24114 ssh_runner.go:195] Run: systemctl --version
I0224 01:03:47.602608 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:47.604986 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.605313 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.605341 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.605437 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.605620 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:47.605810 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.605871 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.605901 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.605962 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:47.606130 24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
I0224 01:03:47.606155 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:47.606285 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.606403 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:47.606501 24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
I0224 01:03:47.698434 24114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0224 01:03:47.720616 24114 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0224 01:03:47.720682 24114 ssh_runner.go:195] Run: which cri-dockerd
I0224 01:03:47.724288 24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0224 01:03:47.733533 24114 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0224 01:03:47.749554 24114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0224 01:03:47.765014 24114 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0224 01:03:47.765035 24114 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0224 01:03:47.765118 24114 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0224 01:03:47.791918 24114 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
kindest/kindnetd:v20221004-44d545d1
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0224 01:03:47.791947 24114 docker.go:560] Images already preloaded, skipping extraction
I0224 01:03:47.791955 24114 start.go:485] detecting cgroup driver to use...
I0224 01:03:47.792040 24114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 01:03:47.809788 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0224 01:03:47.819164 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0224 01:03:47.829096 24114 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0224 01:03:47.829135 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0224 01:03:47.839183 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 01:03:47.849277 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0224 01:03:47.859033 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 01:03:47.869193 24114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0224 01:03:47.879734 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0224 01:03:47.890162 24114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0224 01:03:47.899715 24114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0224 01:03:47.908689 24114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:03:48.018034 24114 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0224 01:03:48.035589 24114 start.go:485] detecting cgroup driver to use...
I0224 01:03:48.035666 24114 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0224 01:03:48.052927 24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0224 01:03:48.073823 24114 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0224 01:03:48.093543 24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0224 01:03:48.106104 24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 01:03:48.118539 24114 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0224 01:03:48.148341 24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 01:03:48.160647 24114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 01:03:48.180346 24114 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0224 01:03:48.302273 24114 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0224 01:03:48.409590 24114 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0224 01:03:48.409616 24114 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0224 01:03:48.426383 24114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:03:48.530953 24114 ssh_runner.go:195] Run: sudo systemctl restart docker
I0224 01:03:49.936371 24114 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.405376904s)
I0224 01:03:49.936434 24114 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0224 01:03:50.053226 24114 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0224 01:03:50.173114 24114 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0224 01:03:50.268183 24114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:03:50.380907 24114 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0224 01:03:50.398249 24114 out.go:177]
W0224 01:03:50.399656 24114 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0224 01:03:50.399672 24114 out.go:239] *
W0224 01:03:50.402548 24114 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0224 01:03:50.403866 24114 out.go:177]
** /stderr **
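For context, the RUNTIME_ENABLE failure above is the tail of minikube's docker/cri-dockerd enable sequence: the log first points crictl at the cri-dockerd socket (the `tee` into `/etc/crictl.yaml` at 01:03:48.160647), then unmasks and restarts the docker and cri-docker units; `sudo systemctl restart cri-docker.socket` is the command that exits with status 1. A minimal sketch of that crictl endpoint write, redirected to a temporary directory instead of `/etc` so it is safe to run outside the VM:

```shell
# Reproduce the crictl config minikube writes on the guest, but into a
# scratch directory rather than /etc (paths here are illustrative only).
dest="$(mktemp -d)/crictl.yaml"
printf '%s' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
' > "$dest"
cat "$dest"
```

The log itself does not say why the socket unit failed; as the error message suggests, `journalctl -xe` on the guest (or `systemctl status cri-docker.socket`) would be the next diagnostic step.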
multinode_test.go:254: I0224 01:03:32.260717 24114 out.go:296] Setting OutFile to fd 1 ...
I0224 01:03:32.260942 24114 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0224 01:03:32.260964 24114 out.go:309] Setting ErrFile to fd 2...
I0224 01:03:32.260971 24114 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0224 01:03:32.261443 24114 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
I0224 01:03:32.261806 24114 mustload.go:65] Loading cluster: multinode-858631
I0224 01:03:32.262145 24114 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0224 01:03:32.262456 24114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0224 01:03:32.262493 24114 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 01:03:32.277174 24114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
I0224 01:03:32.277645 24114 main.go:141] libmachine: () Calling .GetVersion
I0224 01:03:32.278156 24114 main.go:141] libmachine: Using API Version 1
I0224 01:03:32.278177 24114 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 01:03:32.278511 24114 main.go:141] libmachine: () Calling .GetMachineName
I0224 01:03:32.278690 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetState
W0224 01:03:32.280169 24114 host.go:58] "multinode-858631-m03" host status: Stopped
I0224 01:03:32.282511 24114 out.go:177] * Starting worker node multinode-858631-m03 in cluster multinode-858631
I0224 01:03:32.283904 24114 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0224 01:03:32.283939 24114 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0224 01:03:32.283950 24114 cache.go:57] Caching tarball of preloaded images
I0224 01:03:32.284043 24114 preload.go:174] Found /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0224 01:03:32.284058 24114 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0224 01:03:32.284190 24114 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
I0224 01:03:32.284400 24114 cache.go:193] Successfully downloaded all kic artifacts
I0224 01:03:32.284445 24114 start.go:364] acquiring machines lock for multinode-858631-m03: {Name:mk99c679472abf655c2223ea7db4ce727d2ab6ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0224 01:03:32.284513 24114 start.go:368] acquired machines lock for "multinode-858631-m03" in 36.132µs
I0224 01:03:32.284537 24114 start.go:96] Skipping create...Using existing machine configuration
I0224 01:03:32.284549 24114 fix.go:55] fixHost starting: m03
I0224 01:03:32.284825 24114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0224 01:03:32.284858 24114 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 01:03:32.298672 24114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
I0224 01:03:32.299062 24114 main.go:141] libmachine: () Calling .GetVersion
I0224 01:03:32.299472 24114 main.go:141] libmachine: Using API Version 1
I0224 01:03:32.299496 24114 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 01:03:32.299859 24114 main.go:141] libmachine: () Calling .GetMachineName
I0224 01:03:32.300041 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:32.300197 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetState
I0224 01:03:32.301607 24114 fix.go:103] recreateIfNeeded on multinode-858631-m03: state=Stopped err=<nil>
I0224 01:03:32.301631 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
W0224 01:03:32.301795 24114 fix.go:129] unexpected machine state, will restart: <nil>
I0224 01:03:32.303890 24114 out.go:177] * Restarting existing kvm2 VM for "multinode-858631-m03" ...
I0224 01:03:32.305217 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .Start
I0224 01:03:32.305402 24114 main.go:141] libmachine: (multinode-858631-m03) Ensuring networks are active...
I0224 01:03:32.306146 24114 main.go:141] libmachine: (multinode-858631-m03) Ensuring network default is active
I0224 01:03:32.306514 24114 main.go:141] libmachine: (multinode-858631-m03) Ensuring network mk-multinode-858631 is active
I0224 01:03:32.306956 24114 main.go:141] libmachine: (multinode-858631-m03) Getting domain xml...
I0224 01:03:32.307642 24114 main.go:141] libmachine: (multinode-858631-m03) Creating domain...
I0224 01:03:33.535122 24114 main.go:141] libmachine: (multinode-858631-m03) Waiting to get IP...
I0224 01:03:33.536096 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:33.536497 24114 main.go:141] libmachine: (multinode-858631-m03) Found IP for machine: 192.168.39.240
I0224 01:03:33.536523 24114 main.go:141] libmachine: (multinode-858631-m03) Reserving static IP address...
I0224 01:03:33.536541 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has current primary IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:33.537088 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "multinode-858631-m03", mac: "52:54:00:71:f9:c5", ip: "192.168.39.240"} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:02:40 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:33.537115 24114 main.go:141] libmachine: (multinode-858631-m03) Reserved static IP address: 192.168.39.240
I0224 01:03:33.537132 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | skip adding static IP to network mk-multinode-858631 - found existing host DHCP lease matching {name: "multinode-858631-m03", mac: "52:54:00:71:f9:c5", ip: "192.168.39.240"}
I0224 01:03:33.537149 24114 main.go:141] libmachine: (multinode-858631-m03) Waiting for SSH to be available...
I0224 01:03:33.537176 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | Getting to WaitForSSH function...
I0224 01:03:33.539150 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:33.539490 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:02:40 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:33.539546 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:33.539598 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | Using SSH client type: external
I0224 01:03:33.539625 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa (-rw-------)
I0224 01:03:33.539663 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
I0224 01:03:33.539685 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | About to run SSH command:
I0224 01:03:33.539701 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | exit 0
I0224 01:03:45.637148 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | SSH cmd err, output: <nil>:
I0224 01:03:45.637538 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetConfigRaw
I0224 01:03:45.638256 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetIP
I0224 01:03:45.640589 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.640953 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.640999 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.641231 24114 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
I0224 01:03:45.641390 24114 machine.go:88] provisioning docker machine ...
I0224 01:03:45.641406 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:45.641606 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetMachineName
I0224 01:03:45.641772 24114 buildroot.go:166] provisioning hostname "multinode-858631-m03"
I0224 01:03:45.641789 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetMachineName
I0224 01:03:45.641914 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:45.644042 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.644382 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.644408 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.644536 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:45.644688 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:45.644813 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:45.644897 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:45.645054 24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:45.645540 24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:45.645558 24114 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-858631-m03 && echo "multinode-858631-m03" | sudo tee /etc/hostname
I0224 01:03:45.781731 24114 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-858631-m03
I0224 01:03:45.781763 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:45.784434 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.784871 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.784902 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.785048 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:45.785217 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:45.785340 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:45.785461 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:45.785613 24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:45.786019 24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:45.786037 24114 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-858631-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-858631-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-858631-m03' | sudo tee -a /etc/hosts;
fi
fi
I0224 01:03:45.909630 24114 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0224 01:03:45.909671 24114 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-4074/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-4074/.minikube}
I0224 01:03:45.909708 24114 buildroot.go:174] setting up certificates
I0224 01:03:45.909717 24114 provision.go:83] configureAuth start
I0224 01:03:45.909726 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetMachineName
I0224 01:03:45.909926 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetIP
I0224 01:03:45.912640 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.912983 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.913005 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.913176 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:45.915338 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.915697 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.915738 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.915804 24114 provision.go:138] copyHostCerts
I0224 01:03:45.915872 24114 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem, removing ...
I0224 01:03:45.915893 24114 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
I0224 01:03:45.915970 24114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem (1078 bytes)
I0224 01:03:45.916080 24114 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem, removing ...
I0224 01:03:45.916091 24114 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
I0224 01:03:45.916128 24114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem (1123 bytes)
I0224 01:03:45.916214 24114 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem, removing ...
I0224 01:03:45.916224 24114 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
I0224 01:03:45.916256 24114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem (1679 bytes)
I0224 01:03:45.916320 24114 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem org=jenkins.multinode-858631-m03 san=[192.168.39.240 192.168.39.240 localhost 127.0.0.1 minikube multinode-858631-m03]
I0224 01:03:46.139019 24114 provision.go:172] copyRemoteCerts
I0224 01:03:46.139085 24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0224 01:03:46.139111 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:46.141414 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.141764 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:46.141802 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.142019 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:46.142249 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.142398 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:46.142564 24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
I0224 01:03:46.230066 24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0224 01:03:46.252894 24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0224 01:03:46.275597 24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0224 01:03:46.297736 24114 provision.go:86] duration metric: configureAuth took 388.009002ms
I0224 01:03:46.297760 24114 buildroot.go:189] setting minikube options for container-runtime
I0224 01:03:46.297955 24114 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0224 01:03:46.297975 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:46.298205 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:46.300343 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.300707 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:46.300726 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.300874 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:46.300999 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.301111 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.301213 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:46.301379 24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:46.301823 24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:46.301836 24114 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0224 01:03:46.419110 24114 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0224 01:03:46.419136 24114 buildroot.go:70] root file system type: tmpfs
I0224 01:03:46.419278 24114 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0224 01:03:46.419300 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:46.421650 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.422035 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:46.422074 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.422208 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:46.422385 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.422503 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.422600 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:46.422780 24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:46.423174 24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:46.423237 24114 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0224 01:03:46.549929 24114 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0224 01:03:46.549963 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:46.552418 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.552729 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:46.552756 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.552886 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:46.553084 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.553255 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.553414 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:46.553596 24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:46.554026 24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:46.554049 24114 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0224 01:03:47.346622 24114 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0224 01:03:47.346647 24114 machine.go:91] provisioned docker machine in 1.705245446s
I0224 01:03:47.346658 24114 start.go:300] post-start starting for "multinode-858631-m03" (driver="kvm2")
I0224 01:03:47.346666 24114 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0224 01:03:47.346689 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.346962 24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0224 01:03:47.346988 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:47.349581 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.349992 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.350017 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.350172 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:47.350362 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.350549 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:47.350700 24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
I0224 01:03:47.439110 24114 ssh_runner.go:195] Run: cat /etc/os-release
I0224 01:03:47.443190 24114 info.go:137] Remote host: Buildroot 2021.02.12
I0224 01:03:47.443208 24114 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/addons for local assets ...
I0224 01:03:47.443272 24114 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/files for local assets ...
I0224 01:03:47.443345 24114 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> 111312.pem in /etc/ssl/certs
I0224 01:03:47.443425 24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0224 01:03:47.451605 24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /etc/ssl/certs/111312.pem (1708 bytes)
I0224 01:03:47.477354 24114 start.go:303] post-start completed in 130.684659ms
I0224 01:03:47.477374 24114 fix.go:57] fixHost completed within 15.192824116s
I0224 01:03:47.477397 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:47.480041 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.480408 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.480432 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.480585 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:47.480775 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.480910 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.481050 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:47.481200 24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:47.481636 24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:47.481650 24114 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0224 01:03:47.598085 24114 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677200627.547433994
I0224 01:03:47.598107 24114 fix.go:207] guest clock: 1677200627.547433994
I0224 01:03:47.598117 24114 fix.go:220] Guest: 2023-02-24 01:03:47.547433994 +0000 UTC Remote: 2023-02-24 01:03:47.477378328 +0000 UTC m=+15.254967977 (delta=70.055666ms)
I0224 01:03:47.598162 24114 fix.go:191] guest clock delta is within tolerance: 70.055666ms
I0224 01:03:47.598169 24114 start.go:83] releasing machines lock for "multinode-858631-m03", held for 15.313644124s
I0224 01:03:47.598196 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.598466 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetIP
I0224 01:03:47.601108 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.601449 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.601500 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.601635 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.602113 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.602297 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.602410 24114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0224 01:03:47.602454 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:47.602579 24114 ssh_runner.go:195] Run: systemctl --version
I0224 01:03:47.602608 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:47.604986 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.605313 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.605341 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.605437 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.605620 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:47.605810 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.605871 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.605901 24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.605962 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:47.606130 24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
I0224 01:03:47.606155 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:47.606285 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.606403 24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:47.606501 24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
I0224 01:03:47.698434 24114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0224 01:03:47.720616 24114 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0224 01:03:47.720682 24114 ssh_runner.go:195] Run: which cri-dockerd
I0224 01:03:47.724288 24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0224 01:03:47.733533 24114 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0224 01:03:47.749554 24114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0224 01:03:47.765014 24114 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0224 01:03:47.765035 24114 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0224 01:03:47.765118 24114 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0224 01:03:47.791918 24114 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
kindest/kindnetd:v20221004-44d545d1
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0224 01:03:47.791947 24114 docker.go:560] Images already preloaded, skipping extraction
I0224 01:03:47.791955 24114 start.go:485] detecting cgroup driver to use...
I0224 01:03:47.792040 24114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 01:03:47.809788 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0224 01:03:47.819164 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0224 01:03:47.829096 24114 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0224 01:03:47.829135 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0224 01:03:47.839183 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 01:03:47.849277 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0224 01:03:47.859033 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 01:03:47.869193 24114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0224 01:03:47.879734 24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0224 01:03:47.890162 24114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0224 01:03:47.899715 24114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0224 01:03:47.908689 24114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:03:48.018034 24114 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0224 01:03:48.035589 24114 start.go:485] detecting cgroup driver to use...
I0224 01:03:48.035666 24114 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0224 01:03:48.052927 24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0224 01:03:48.073823 24114 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0224 01:03:48.093543 24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0224 01:03:48.106104 24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 01:03:48.118539 24114 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0224 01:03:48.148341 24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 01:03:48.160647 24114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 01:03:48.180346 24114 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0224 01:03:48.302273 24114 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0224 01:03:48.409590 24114 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0224 01:03:48.409616 24114 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0224 01:03:48.426383 24114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:03:48.530953 24114 ssh_runner.go:195] Run: sudo systemctl restart docker
I0224 01:03:49.936371 24114 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.405376904s)
I0224 01:03:49.936434 24114 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0224 01:03:50.053226 24114 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0224 01:03:50.173114 24114 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0224 01:03:50.268183 24114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:03:50.380907 24114 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0224 01:03:50.398249 24114 out.go:177]
W0224 01:03:50.399656 24114 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0224 01:03:50.399672 24114 out.go:239] *
*
W0224 01:03:50.402548 24114 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0224 01:03:50.403866 24114 out.go:177]
multinode_test.go:255: node start returned an error. args "out/minikube-linux-amd64 -p multinode-858631 node start m03 --alsologtostderr": exit status 90
multinode_test.go:259: (dbg) Run: out/minikube-linux-amd64 -p multinode-858631 status
multinode_test.go:259: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-858631 status: exit status 2 (552.605592ms)
-- stdout --
multinode-858631
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
multinode-858631-m02
type: Worker
host: Running
kubelet: Running
multinode-858631-m03
type: Worker
host: Running
kubelet: Stopped
-- /stdout --
multinode_test.go:261: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-858631 status" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-858631 -n multinode-858631
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-858631 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-858631 logs -n 25: (1.137769573s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| cp | multinode-858631 cp multinode-858631:/home/docker/cp-test.txt | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m03:/home/docker/cp-test_multinode-858631_multinode-858631-m03.txt | | | | | |
| ssh | multinode-858631 ssh -n | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-858631 ssh -n multinode-858631-m03 sudo cat | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | /home/docker/cp-test_multinode-858631_multinode-858631-m03.txt | | | | | |
| cp | multinode-858631 cp testdata/cp-test.txt | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m02:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-858631 ssh -n | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-858631 cp multinode-858631-m02:/home/docker/cp-test.txt | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | /tmp/TestMultiNodeserialCopyFile3133866316/001/cp-test_multinode-858631-m02.txt | | | | | |
| ssh | multinode-858631 ssh -n | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-858631 cp multinode-858631-m02:/home/docker/cp-test.txt | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631:/home/docker/cp-test_multinode-858631-m02_multinode-858631.txt | | | | | |
| ssh | multinode-858631 ssh -n | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-858631 ssh -n multinode-858631 sudo cat | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | /home/docker/cp-test_multinode-858631-m02_multinode-858631.txt | | | | | |
| cp | multinode-858631 cp multinode-858631-m02:/home/docker/cp-test.txt | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m03:/home/docker/cp-test_multinode-858631-m02_multinode-858631-m03.txt | | | | | |
| ssh | multinode-858631 ssh -n | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-858631 ssh -n multinode-858631-m03 sudo cat | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | /home/docker/cp-test_multinode-858631-m02_multinode-858631-m03.txt | | | | | |
| cp | multinode-858631 cp testdata/cp-test.txt | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-858631 ssh -n | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-858631 cp multinode-858631-m03:/home/docker/cp-test.txt | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | /tmp/TestMultiNodeserialCopyFile3133866316/001/cp-test_multinode-858631-m03.txt | | | | | |
| ssh | multinode-858631 ssh -n | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-858631 cp multinode-858631-m03:/home/docker/cp-test.txt | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631:/home/docker/cp-test_multinode-858631-m03_multinode-858631.txt | | | | | |
| ssh | multinode-858631 ssh -n | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-858631 ssh -n multinode-858631 sudo cat | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | /home/docker/cp-test_multinode-858631-m03_multinode-858631.txt | | | | | |
| cp | multinode-858631 cp multinode-858631-m03:/home/docker/cp-test.txt | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m02:/home/docker/cp-test_multinode-858631-m03_multinode-858631-m02.txt | | | | | |
| ssh | multinode-858631 ssh -n | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | multinode-858631-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-858631 ssh -n multinode-858631-m02 sudo cat | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| | /home/docker/cp-test_multinode-858631-m03_multinode-858631-m02.txt | | | | | |
| node | multinode-858631 node stop m03 | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
| node | multinode-858631 node start | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | |
| | m03 --alsologtostderr | | | | | |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/02/24 01:00:07
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.20.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0224 01:00:07.922860 21922 out.go:296] Setting OutFile to fd 1 ...
I0224 01:00:07.923056 21922 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0224 01:00:07.923066 21922 out.go:309] Setting ErrFile to fd 2...
I0224 01:00:07.923073 21922 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0224 01:00:07.923190 21922 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
I0224 01:00:07.923759 21922 out.go:303] Setting JSON to false
I0224 01:00:07.924632 21922 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2557,"bootTime":1677197851,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0224 01:00:07.924691 21922 start.go:135] virtualization: kvm guest
I0224 01:00:07.927314 21922 out.go:177] * [multinode-858631] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0224 01:00:07.929106 21922 out.go:177] - MINIKUBE_LOCATION=15909
I0224 01:00:07.929051 21922 notify.go:220] Checking for updates...
I0224 01:00:07.930542 21922 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0224 01:00:07.932177 21922 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
I0224 01:00:07.933715 21922 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
I0224 01:00:07.935104 21922 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0224 01:00:07.936519 21922 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0224 01:00:07.937943 21922 driver.go:365] Setting default libvirt URI to qemu:///system
I0224 01:00:07.972305 21922 out.go:177] * Using the kvm2 driver based on user configuration
I0224 01:00:07.973594 21922 start.go:296] selected driver: kvm2
I0224 01:00:07.973608 21922 start.go:857] validating driver "kvm2" against <nil>
I0224 01:00:07.973618 21922 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0224 01:00:07.974205 21922 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 01:00:07.974270 21922 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-4074/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0224 01:00:07.988124 21922 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0224 01:00:07.988170 21922 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0224 01:00:07.988380 21922 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0224 01:00:07.988411 21922 cni.go:84] Creating CNI manager for ""
I0224 01:00:07.988423 21922 cni.go:136] 0 nodes found, recommending kindnet
I0224 01:00:07.988433 21922 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
I0224 01:00:07.988452 21922 start_flags.go:319] config:
{Name:multinode-858631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0224 01:00:07.988547 21922 iso.go:125] acquiring lock: {Name:mkc3d6185dc03bdb5dc9fb9cd39dd085e0eef640 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 01:00:07.990401 21922 out.go:177] * Starting control plane node multinode-858631 in cluster multinode-858631
I0224 01:00:07.991675 21922 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0224 01:00:07.991700 21922 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0224 01:00:07.991715 21922 cache.go:57] Caching tarball of preloaded images
I0224 01:00:07.991784 21922 preload.go:174] Found /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0224 01:00:07.991794 21922 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0224 01:00:07.992091 21922 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
I0224 01:00:07.992110 21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json: {Name:mkc2f0838e41fb815d83b476363d0d2dba762f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 01:00:07.992221 21922 cache.go:193] Successfully downloaded all kic artifacts
I0224 01:00:07.992241 21922 start.go:364] acquiring machines lock for multinode-858631: {Name:mk99c679472abf655c2223ea7db4ce727d2ab6ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0224 01:00:07.992265 21922 start.go:368] acquired machines lock for "multinode-858631" in 14.484µs
I0224 01:00:07.992282 21922 start.go:93] Provisioning new machine with config: &{Name:multinode-858631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0224 01:00:07.992341 21922 start.go:125] createHost starting for "" (driver="kvm2")
I0224 01:00:07.994145 21922 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0224 01:00:07.994259 21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0224 01:00:07.994296 21922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 01:00:08.007604 21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
I0224 01:00:08.008010 21922 main.go:141] libmachine: () Calling .GetVersion
I0224 01:00:08.009823 21922 main.go:141] libmachine: Using API Version 1
I0224 01:00:08.009850 21922 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 01:00:08.010160 21922 main.go:141] libmachine: () Calling .GetMachineName
I0224 01:00:08.010346 21922 main.go:141] libmachine: (multinode-858631) Calling .GetMachineName
I0224 01:00:08.010472 21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
I0224 01:00:08.010601 21922 start.go:159] libmachine.API.Create for "multinode-858631" (driver="kvm2")
I0224 01:00:08.010626 21922 client.go:168] LocalClient.Create starting
I0224 01:00:08.010656 21922 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem
I0224 01:00:08.010682 21922 main.go:141] libmachine: Decoding PEM data...
I0224 01:00:08.010697 21922 main.go:141] libmachine: Parsing certificate...
I0224 01:00:08.010742 21922 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem
I0224 01:00:08.010761 21922 main.go:141] libmachine: Decoding PEM data...
I0224 01:00:08.010777 21922 main.go:141] libmachine: Parsing certificate...
I0224 01:00:08.010794 21922 main.go:141] libmachine: Running pre-create checks...
I0224 01:00:08.010803 21922 main.go:141] libmachine: (multinode-858631) Calling .PreCreateCheck
I0224 01:00:08.011056 21922 main.go:141] libmachine: (multinode-858631) Calling .GetConfigRaw
I0224 01:00:08.011423 21922 main.go:141] libmachine: Creating machine...
I0224 01:00:08.011437 21922 main.go:141] libmachine: (multinode-858631) Calling .Create
I0224 01:00:08.011546 21922 main.go:141] libmachine: (multinode-858631) Creating KVM machine...
I0224 01:00:08.012611 21922 main.go:141] libmachine: (multinode-858631) DBG | found existing default KVM network
I0224 01:00:08.013199 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:08.013088 21944 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000029240}
I0224 01:00:08.018125 21922 main.go:141] libmachine: (multinode-858631) DBG | trying to create private KVM network mk-multinode-858631 192.168.39.0/24...
I0224 01:00:08.082681 21922 main.go:141] libmachine: (multinode-858631) DBG | private KVM network mk-multinode-858631 192.168.39.0/24 created
I0224 01:00:08.082717 21922 main.go:141] libmachine: (multinode-858631) Setting up store path in /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631 ...
I0224 01:00:08.082733 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:08.082648 21944 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/15909-4074/.minikube
I0224 01:00:08.082754 21922 main.go:141] libmachine: (multinode-858631) Building disk image from file:///home/jenkins/minikube-integration/15909-4074/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso
I0224 01:00:08.082824 21922 main.go:141] libmachine: (multinode-858631) Downloading /home/jenkins/minikube-integration/15909-4074/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/15909-4074/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
I0224 01:00:08.280134 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:08.280031 21944 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa...
I0224 01:00:08.321431 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:08.321350 21944 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/multinode-858631.rawdisk...
I0224 01:00:08.321458 21922 main.go:141] libmachine: (multinode-858631) DBG | Writing magic tar header
I0224 01:00:08.321486 21922 main.go:141] libmachine: (multinode-858631) DBG | Writing SSH key tar header
I0224 01:00:08.321556 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:08.321463 21944 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631 ...
I0224 01:00:08.321580 21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631
I0224 01:00:08.321600 21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074/.minikube/machines
I0224 01:00:08.321630 21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074/.minikube
I0224 01:00:08.321646 21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074
I0224 01:00:08.321663 21922 main.go:141] libmachine: (multinode-858631) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631 (perms=drwx------)
I0224 01:00:08.321679 21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0224 01:00:08.321695 21922 main.go:141] libmachine: (multinode-858631) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074/.minikube/machines (perms=drwxrwxr-x)
I0224 01:00:08.321709 21922 main.go:141] libmachine: (multinode-858631) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074/.minikube (perms=drwxr-xr-x)
I0224 01:00:08.321724 21922 main.go:141] libmachine: (multinode-858631) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074 (perms=drwxrwxr-x)
I0224 01:00:08.321740 21922 main.go:141] libmachine: (multinode-858631) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0224 01:00:08.321755 21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home/jenkins
I0224 01:00:08.321772 21922 main.go:141] libmachine: (multinode-858631) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0224 01:00:08.321782 21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home
I0224 01:00:08.321796 21922 main.go:141] libmachine: (multinode-858631) DBG | Skipping /home - not owner
I0224 01:00:08.321810 21922 main.go:141] libmachine: (multinode-858631) Creating domain...
I0224 01:00:08.322759 21922 main.go:141] libmachine: (multinode-858631) define libvirt domain using xml:
I0224 01:00:08.322783 21922 main.go:141] libmachine: (multinode-858631) <domain type='kvm'>
I0224 01:00:08.322792 21922 main.go:141] libmachine: (multinode-858631) <name>multinode-858631</name>
I0224 01:00:08.322801 21922 main.go:141] libmachine: (multinode-858631) <memory unit='MiB'>2200</memory>
I0224 01:00:08.322810 21922 main.go:141] libmachine: (multinode-858631) <vcpu>2</vcpu>
I0224 01:00:08.322817 21922 main.go:141] libmachine: (multinode-858631) <features>
I0224 01:00:08.322826 21922 main.go:141] libmachine: (multinode-858631) <acpi/>
I0224 01:00:08.322831 21922 main.go:141] libmachine: (multinode-858631) <apic/>
I0224 01:00:08.322839 21922 main.go:141] libmachine: (multinode-858631) <pae/>
I0224 01:00:08.322849 21922 main.go:141] libmachine: (multinode-858631)
I0224 01:00:08.322859 21922 main.go:141] libmachine: (multinode-858631) </features>
I0224 01:00:08.322872 21922 main.go:141] libmachine: (multinode-858631) <cpu mode='host-passthrough'>
I0224 01:00:08.322880 21922 main.go:141] libmachine: (multinode-858631)
I0224 01:00:08.322885 21922 main.go:141] libmachine: (multinode-858631) </cpu>
I0224 01:00:08.322912 21922 main.go:141] libmachine: (multinode-858631) <os>
I0224 01:00:08.322936 21922 main.go:141] libmachine: (multinode-858631) <type>hvm</type>
I0224 01:00:08.322955 21922 main.go:141] libmachine: (multinode-858631) <boot dev='cdrom'/>
I0224 01:00:08.322966 21922 main.go:141] libmachine: (multinode-858631) <boot dev='hd'/>
I0224 01:00:08.322989 21922 main.go:141] libmachine: (multinode-858631) <bootmenu enable='no'/>
I0224 01:00:08.323001 21922 main.go:141] libmachine: (multinode-858631) </os>
I0224 01:00:08.323011 21922 main.go:141] libmachine: (multinode-858631) <devices>
I0224 01:00:08.323027 21922 main.go:141] libmachine: (multinode-858631) <disk type='file' device='cdrom'>
I0224 01:00:08.323106 21922 main.go:141] libmachine: (multinode-858631) <source file='/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/boot2docker.iso'/>
I0224 01:00:08.323139 21922 main.go:141] libmachine: (multinode-858631) <target dev='hdc' bus='scsi'/>
I0224 01:00:08.323157 21922 main.go:141] libmachine: (multinode-858631) <readonly/>
I0224 01:00:08.323170 21922 main.go:141] libmachine: (multinode-858631) </disk>
I0224 01:00:08.323187 21922 main.go:141] libmachine: (multinode-858631) <disk type='file' device='disk'>
I0224 01:00:08.323203 21922 main.go:141] libmachine: (multinode-858631) <driver name='qemu' type='raw' cache='default' io='threads' />
I0224 01:00:08.323221 21922 main.go:141] libmachine: (multinode-858631) <source file='/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/multinode-858631.rawdisk'/>
I0224 01:00:08.323237 21922 main.go:141] libmachine: (multinode-858631) <target dev='hda' bus='virtio'/>
I0224 01:00:08.323250 21922 main.go:141] libmachine: (multinode-858631) </disk>
I0224 01:00:08.323265 21922 main.go:141] libmachine: (multinode-858631) <interface type='network'>
I0224 01:00:08.323279 21922 main.go:141] libmachine: (multinode-858631) <source network='mk-multinode-858631'/>
I0224 01:00:08.323292 21922 main.go:141] libmachine: (multinode-858631) <model type='virtio'/>
I0224 01:00:08.323307 21922 main.go:141] libmachine: (multinode-858631) </interface>
I0224 01:00:08.323320 21922 main.go:141] libmachine: (multinode-858631) <interface type='network'>
I0224 01:00:08.323334 21922 main.go:141] libmachine: (multinode-858631) <source network='default'/>
I0224 01:00:08.323345 21922 main.go:141] libmachine: (multinode-858631) <model type='virtio'/>
I0224 01:00:08.323359 21922 main.go:141] libmachine: (multinode-858631) </interface>
I0224 01:00:08.323371 21922 main.go:141] libmachine: (multinode-858631) <serial type='pty'>
I0224 01:00:08.323413 21922 main.go:141] libmachine: (multinode-858631) <target port='0'/>
I0224 01:00:08.323438 21922 main.go:141] libmachine: (multinode-858631) </serial>
I0224 01:00:08.323453 21922 main.go:141] libmachine: (multinode-858631) <console type='pty'>
I0224 01:00:08.323467 21922 main.go:141] libmachine: (multinode-858631) <target type='serial' port='0'/>
I0224 01:00:08.323481 21922 main.go:141] libmachine: (multinode-858631) </console>
I0224 01:00:08.323493 21922 main.go:141] libmachine: (multinode-858631) <rng model='virtio'>
I0224 01:00:08.323506 21922 main.go:141] libmachine: (multinode-858631) <backend model='random'>/dev/random</backend>
I0224 01:00:08.323516 21922 main.go:141] libmachine: (multinode-858631) </rng>
I0224 01:00:08.323522 21922 main.go:141] libmachine: (multinode-858631)
I0224 01:00:08.323529 21922 main.go:141] libmachine: (multinode-858631)
I0224 01:00:08.323539 21922 main.go:141] libmachine: (multinode-858631) </devices>
I0224 01:00:08.323547 21922 main.go:141] libmachine: (multinode-858631) </domain>
I0224 01:00:08.323554 21922 main.go:141] libmachine: (multinode-858631)
I0224 01:00:08.327792 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:ea:23:eb in network default
I0224 01:00:08.328394 21922 main.go:141] libmachine: (multinode-858631) Ensuring networks are active...
I0224 01:00:08.328414 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:08.329011 21922 main.go:141] libmachine: (multinode-858631) Ensuring network default is active
I0224 01:00:08.329257 21922 main.go:141] libmachine: (multinode-858631) Ensuring network mk-multinode-858631 is active
I0224 01:00:08.329759 21922 main.go:141] libmachine: (multinode-858631) Getting domain xml...
I0224 01:00:08.330421 21922 main.go:141] libmachine: (multinode-858631) Creating domain...
I0224 01:00:09.542870 21922 main.go:141] libmachine: (multinode-858631) Waiting to get IP...
I0224 01:00:09.543702 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:09.544117 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:09.544177 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:09.544117 21944 retry.go:31] will retry after 287.452956ms: waiting for machine to come up
I0224 01:00:09.833774 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:09.834253 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:09.834281 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:09.834215 21944 retry.go:31] will retry after 273.07846ms: waiting for machine to come up
I0224 01:00:10.108537 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:10.108935 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:10.108967 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:10.108868 21944 retry.go:31] will retry after 375.690347ms: waiting for machine to come up
I0224 01:00:10.486312 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:10.486717 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:10.486744 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:10.486664 21944 retry.go:31] will retry after 536.69253ms: waiting for machine to come up
I0224 01:00:11.025320 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:11.025808 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:11.025857 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:11.025757 21944 retry.go:31] will retry after 478.181904ms: waiting for machine to come up
I0224 01:00:11.505306 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:11.505791 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:11.505831 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:11.505730 21944 retry.go:31] will retry after 832.674291ms: waiting for machine to come up
I0224 01:00:12.339590 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:12.339985 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:12.340008 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:12.339946 21944 retry.go:31] will retry after 979.085118ms: waiting for machine to come up
I0224 01:00:13.320588 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:13.320998 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:13.321025 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:13.320951 21944 retry.go:31] will retry after 1.324498058s: waiting for machine to come up
I0224 01:00:14.647576 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:14.648036 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:14.648065 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:14.647998 21944 retry.go:31] will retry after 1.26767628s: waiting for machine to come up
I0224 01:00:15.916908 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:15.917321 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:15.917351 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:15.917277 21944 retry.go:31] will retry after 2.091389937s: waiting for machine to come up
I0224 01:00:18.010032 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:18.010458 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:18.010496 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:18.010410 21944 retry.go:31] will retry after 2.648687931s: waiting for machine to come up
I0224 01:00:20.662372 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:20.662826 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:20.662889 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:20.662777 21944 retry.go:31] will retry after 2.698111279s: waiting for machine to come up
I0224 01:00:23.362043 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:23.362471 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:23.362500 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:23.362421 21944 retry.go:31] will retry after 3.027915498s: waiting for machine to come up
I0224 01:00:26.391429 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:26.391846 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
I0224 01:00:26.391874 21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:26.391803 21944 retry.go:31] will retry after 3.726786776s: waiting for machine to come up
I0224 01:00:30.121111 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.121498 21922 main.go:141] libmachine: (multinode-858631) Found IP for machine: 192.168.39.217
I0224 01:00:30.121529 21922 main.go:141] libmachine: (multinode-858631) Reserving static IP address...
I0224 01:00:30.121545 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has current primary IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.121936 21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find host DHCP lease matching {name: "multinode-858631", mac: "52:54:00:96:ba:53", ip: "192.168.39.217"} in network mk-multinode-858631
I0224 01:00:30.190556 21922 main.go:141] libmachine: (multinode-858631) DBG | Getting to WaitForSSH function...
I0224 01:00:30.190591 21922 main.go:141] libmachine: (multinode-858631) Reserved static IP address: 192.168.39.217
I0224 01:00:30.190604 21922 main.go:141] libmachine: (multinode-858631) Waiting for SSH to be available...
I0224 01:00:30.192817 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.193150 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:ba:53}
I0224 01:00:30.193180 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.193311 21922 main.go:141] libmachine: (multinode-858631) DBG | Using SSH client type: external
I0224 01:00:30.193350 21922 main.go:141] libmachine: (multinode-858631) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa (-rw-------)
I0224 01:00:30.193384 21922 main.go:141] libmachine: (multinode-858631) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa -p 22] /usr/bin/ssh <nil>}
I0224 01:00:30.193401 21922 main.go:141] libmachine: (multinode-858631) DBG | About to run SSH command:
I0224 01:00:30.193436 21922 main.go:141] libmachine: (multinode-858631) DBG | exit 0
I0224 01:00:30.288864 21922 main.go:141] libmachine: (multinode-858631) DBG | SSH cmd err, output: <nil>:
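The external-SSH probe above runs `exit 0` through `/usr/bin/ssh` with a fixed option set. A sketch of how that argument list is assembled — the option values are copied from the logged command line, but the function name and shape are illustrative, not libmachine's actual code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// buildSSHArgs reproduces the external-ssh argument list from the log
// ("-F /dev/null -o ConnectionAttempts=3 ... docker@192.168.39.217 ...").
// StrictHostKeyChecking=no and UserKnownHostsFile=/dev/null make the probe
// work against a freshly created VM with an unknown host key.
func buildSSHArgs(user, host, keyPath string, port int) []string {
	return []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		fmt.Sprintf("%s@%s", user, host),
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", strconv.Itoa(port),
	}
}

func main() {
	args := buildSSHArgs("docker", "192.168.39.217", "/tmp/id_rsa", 22)
	fmt.Println(strings.Join(args, " "))
}
```

Running `exit 0` with these arguments and checking for a zero exit status is the whole "SSH available" test: no output is expected, only a successful connection.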
I0224 01:00:30.289137 21922 main.go:141] libmachine: (multinode-858631) KVM machine creation complete!
I0224 01:00:30.289430 21922 main.go:141] libmachine: (multinode-858631) Calling .GetConfigRaw
I0224 01:00:30.289976 21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
I0224 01:00:30.290153 21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
I0224 01:00:30.290321 21922 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0224 01:00:30.290336 21922 main.go:141] libmachine: (multinode-858631) Calling .GetState
I0224 01:00:30.291526 21922 main.go:141] libmachine: Detecting operating system of created instance...
I0224 01:00:30.291540 21922 main.go:141] libmachine: Waiting for SSH to be available...
I0224 01:00:30.291545 21922 main.go:141] libmachine: Getting to WaitForSSH function...
I0224 01:00:30.291551 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:30.293860 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.294219 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:30.294249 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.294386 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:00:30.294544 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:30.294672 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:30.294802 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:00:30.294950 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:00:30.295359 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0224 01:00:30.295371 21922 main.go:141] libmachine: About to run SSH command:
exit 0
I0224 01:00:30.420171 21922 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0224 01:00:30.420190 21922 main.go:141] libmachine: Detecting the provisioner...
I0224 01:00:30.420197 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:30.422737 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.423108 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:30.423136 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.423267 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:00:30.423458 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:30.423597 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:30.423739 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:00:30.423884 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:00:30.424275 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0224 01:00:30.424287 21922 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0224 01:00:30.549909 21922 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2021.02.12-1-g41e8300-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I0224 01:00:30.549975 21922 main.go:141] libmachine: found compatible host: buildroot
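Provisioner detection above boils down to parsing the `cat /etc/os-release` output and matching on `ID`. A minimal parser for that key/value format (the real libmachine detection logic is richer; this sketch only covers what the log shows):

```go
package main

import (
	"fmt"
	"strings"
)

// parseOSRelease turns `cat /etc/os-release` output into a key/value map,
// stripping optional double quotes -- enough to recognize the Buildroot
// guest above via ID=buildroot.
func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	for _, line := range strings.Split(out, "\n") {
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue // blank line or comment, no key=value pair
		}
		kv[k] = strings.Trim(v, `"`) // values may be quoted, e.g. PRETTY_NAME
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2021.02.12-1-g41e8300-dirty\nID=buildroot\nVERSION_ID=2021.02.12\nPRETTY_NAME=\"Buildroot 2021.02.12\"\n"
	kv := parseOSRelease(out)
	fmt.Println(kv["ID"], "/", kv["PRETTY_NAME"])
	// prints: buildroot / Buildroot 2021.02.12
}
```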
I0224 01:00:30.549989 21922 main.go:141] libmachine: Provisioning with buildroot...
I0224 01:00:30.550000 21922 main.go:141] libmachine: (multinode-858631) Calling .GetMachineName
I0224 01:00:30.550269 21922 buildroot.go:166] provisioning hostname "multinode-858631"
I0224 01:00:30.550293 21922 main.go:141] libmachine: (multinode-858631) Calling .GetMachineName
I0224 01:00:30.550475 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:30.552822 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.553097 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:30.553124 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.553249 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:00:30.553417 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:30.553588 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:30.553701 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:00:30.553839 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:00:30.554225 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0224 01:00:30.554239 21922 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-858631 && echo "multinode-858631" | sudo tee /etc/hostname
I0224 01:00:30.693960 21922 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-858631
I0224 01:00:30.693988 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:30.696773 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.697120 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:30.697152 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.697330 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:00:30.697511 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:30.697665 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:30.697809 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:00:30.697941 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:00:30.698385 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0224 01:00:30.698405 21922 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-858631' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-858631/g' /etc/hosts;
else
echo '127.0.1.1 multinode-858631' | sudo tee -a /etc/hosts;
fi
fi
I0224 01:00:30.833828 21922 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0224 01:00:30.833864 21922 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-4074/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-4074/.minikube}
I0224 01:00:30.833907 21922 buildroot.go:174] setting up certificates
I0224 01:00:30.833922 21922 provision.go:83] configureAuth start
I0224 01:00:30.833940 21922 main.go:141] libmachine: (multinode-858631) Calling .GetMachineName
I0224 01:00:30.834224 21922 main.go:141] libmachine: (multinode-858631) Calling .GetIP
I0224 01:00:30.836812 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.837162 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:30.837191 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.837314 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:30.839548 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.839897 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:30.839920 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.839995 21922 provision.go:138] copyHostCerts
I0224 01:00:30.840033 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
I0224 01:00:30.840074 21922 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem, removing ...
I0224 01:00:30.840085 21922 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
I0224 01:00:30.840143 21922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem (1679 bytes)
I0224 01:00:30.840230 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
I0224 01:00:30.840250 21922 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem, removing ...
I0224 01:00:30.840258 21922 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
I0224 01:00:30.840284 21922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem (1078 bytes)
I0224 01:00:30.840357 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
I0224 01:00:30.840377 21922 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem, removing ...
I0224 01:00:30.840382 21922 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
I0224 01:00:30.840402 21922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem (1123 bytes)
I0224 01:00:30.840450 21922 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem org=jenkins.multinode-858631 san=[192.168.39.217 192.168.39.217 localhost 127.0.0.1 minikube multinode-858631]
I0224 01:00:30.983124 21922 provision.go:172] copyRemoteCerts
I0224 01:00:30.983169 21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0224 01:00:30.983190 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:30.985644 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.985953 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:30.985982 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:30.986131 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:00:30.986312 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:30.986474 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:00:30.986605 21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
I0224 01:00:31.081588 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem -> /etc/docker/server.pem
I0224 01:00:31.081671 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0224 01:00:31.103684 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0224 01:00:31.103738 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0224 01:00:31.125916 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0224 01:00:31.125976 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0224 01:00:31.148440 21922 provision.go:86] duration metric: configureAuth took 314.504412ms
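The `copyRemoteCerts` step above performs three fixed host-to-guest copies. A small table of that transfer plan (the destination paths come straight from the log; the function name and shape are illustrative):

```go
package main

import "fmt"

// certTransfers lists the host->guest copies performed during
// copyRemoteCerts: the per-machine server cert/key plus the shared CA cert,
// all landing under /etc/docker where dockerd's --tls* flags expect them.
func certTransfers(machineDir, certsDir string) map[string]string {
	return map[string]string{
		machineDir + "/server.pem":     "/etc/docker/server.pem",
		machineDir + "/server-key.pem": "/etc/docker/server-key.pem",
		certsDir + "/ca.pem":           "/etc/docker/ca.pem",
	}
}

func main() {
	plan := certTransfers(
		"/home/jenkins/.minikube/machines/multinode-858631",
		"/home/jenkins/.minikube/certs")
	for src, dst := range plan {
		fmt.Println(src, "->", dst)
	}
}
```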
I0224 01:00:31.148459 21922 buildroot.go:189] setting minikube options for container-runtime
I0224 01:00:31.148629 21922 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0224 01:00:31.148652 21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
I0224 01:00:31.148893 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:31.151098 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:31.151447 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:31.151474 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:31.151613 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:00:31.151787 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:31.151961 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:31.152107 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:00:31.152279 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:00:31.152719 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0224 01:00:31.152733 21922 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0224 01:00:31.283355 21922 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0224 01:00:31.283381 21922 buildroot.go:70] root file system type: tmpfs
I0224 01:00:31.283501 21922 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0224 01:00:31.283530 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:31.286213 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:31.286507 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:31.286526 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:31.286697 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:00:31.286883 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:31.287047 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:31.287198 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:00:31.287357 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:00:31.287788 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0224 01:00:31.287859 21922 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0224 01:00:31.425437 21922 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
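The comments in the unit file above explain why `ExecStart=` appears twice: the empty assignment clears the value inherited from the base unit, so the replacement command is the only one left. A minimal sketch of that reset pattern, written to a temp file rather than a real systemd override (a real one would live in /etc/systemd/system/docker.service.d/ and need `systemctl daemon-reload`):

```shell
# Hypothetical override file in a temp dir, not a live systemd unit.
dir=$(mktemp -d)
cat > "$dir/override.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# Two ExecStart= lines: the empty reset plus the replacement command.
grep -c '^ExecStart=' "$dir/override.conf"   # prints 2
```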
I0224 01:00:31.425499 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:31.427865 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:31.428169 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:31.428192 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:31.428349 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:00:31.428522 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:31.428655 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:31.428784 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:00:31.428921 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:00:31.429304 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0224 01:00:31.429322 21922 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0224 01:00:32.179405 21922 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
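The SSH command above (`diff -u ... || { mv ...; restart; }`) swaps in the new unit file only when it differs from what is installed, or when diff cannot stat the target at all, as on this first boot. A local sketch of that update-if-changed pattern, with temp files standing in for the real service files:

```shell
# Temp files stand in for /lib/systemd/system/docker.service{,.new}.
current=$(mktemp) ; candidate=$(mktemp)
echo "old unit" > "$current"
echo "new unit" > "$candidate"
# Replace the installed file only when the candidate differs (or the
# target is missing); identical files leave the service untouched.
diff -u "$current" "$candidate" >/dev/null 2>&1 || mv "$candidate" "$current"
cat "$current"   # prints "new unit"
```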
I0224 01:00:32.179427 21922 main.go:141] libmachine: Checking connection to Docker...
I0224 01:00:32.179435 21922 main.go:141] libmachine: (multinode-858631) Calling .GetURL
I0224 01:00:32.180653 21922 main.go:141] libmachine: (multinode-858631) DBG | Using libvirt version 6000000
I0224 01:00:32.183228 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.183562 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:32.183591 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.183746 21922 main.go:141] libmachine: Docker is up and running!
I0224 01:00:32.183761 21922 main.go:141] libmachine: Reticulating splines...
I0224 01:00:32.183769 21922 client.go:171] LocalClient.Create took 24.173132801s
I0224 01:00:32.183791 21922 start.go:167] duration metric: libmachine.API.Create for "multinode-858631" took 24.173190525s
I0224 01:00:32.183802 21922 start.go:300] post-start starting for "multinode-858631" (driver="kvm2")
I0224 01:00:32.183807 21922 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0224 01:00:32.183827 21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
I0224 01:00:32.184063 21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0224 01:00:32.184087 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:32.186279 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.186573 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:32.186604 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.186732 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:00:32.186952 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:32.187107 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:00:32.187244 21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
I0224 01:00:32.278374 21922 ssh_runner.go:195] Run: cat /etc/os-release
I0224 01:00:32.282760 21922 command_runner.go:130] > NAME=Buildroot
I0224 01:00:32.282783 21922 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
I0224 01:00:32.282788 21922 command_runner.go:130] > ID=buildroot
I0224 01:00:32.282793 21922 command_runner.go:130] > VERSION_ID=2021.02.12
I0224 01:00:32.282798 21922 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0224 01:00:32.282828 21922 info.go:137] Remote host: Buildroot 2021.02.12
I0224 01:00:32.282843 21922 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/addons for local assets ...
I0224 01:00:32.282918 21922 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/files for local assets ...
I0224 01:00:32.283013 21922 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> 111312.pem in /etc/ssl/certs
I0224 01:00:32.283026 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> /etc/ssl/certs/111312.pem
I0224 01:00:32.283100 21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0224 01:00:32.291259 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /etc/ssl/certs/111312.pem (1708 bytes)
I0224 01:00:32.314029 21922 start.go:303] post-start completed in 130.213564ms
I0224 01:00:32.314071 21922 main.go:141] libmachine: (multinode-858631) Calling .GetConfigRaw
I0224 01:00:32.314587 21922 main.go:141] libmachine: (multinode-858631) Calling .GetIP
I0224 01:00:32.316743 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.317060 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:32.317090 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.317279 21922 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
I0224 01:00:32.317451 21922 start.go:128] duration metric: createHost completed in 24.325103397s
I0224 01:00:32.317493 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:32.319467 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.319758 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:32.319785 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.319927 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:00:32.320117 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:32.320246 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:32.320378 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:00:32.320517 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:00:32.320897 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0224 01:00:32.320908 21922 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0224 01:00:32.449766 21922 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677200432.433992026
I0224 01:00:32.449792 21922 fix.go:207] guest clock: 1677200432.433992026
I0224 01:00:32.449804 21922 fix.go:220] Guest: 2023-02-24 01:00:32.433992026 +0000 UTC Remote: 2023-02-24 01:00:32.317464505 +0000 UTC m=+24.432912270 (delta=116.527521ms)
I0224 01:00:32.449830 21922 fix.go:191] guest clock delta is within tolerance: 116.527521ms
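The `fix.go` lines above compare the guest's clock against the host's and accept a small skew. A sketch of that delta check, reusing the two timestamps from the log; the 1-second tolerance here is illustrative, not minikube's exact threshold:

```shell
# Timestamps lifted from the log above; awk does the float math since
# plain shell arithmetic is integer-only.
guest=1677200432.433992026
host=1677200432.317464505
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.9f", d }')
# Accept the skew when it is under the (illustrative) tolerance.
awk -v d="$delta" 'BEGIN { exit !(d < 1.0) }' && echo "within tolerance"
```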
I0224 01:00:32.449837 21922 start.go:83] releasing machines lock for "multinode-858631", held for 24.457561476s
I0224 01:00:32.449860 21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
I0224 01:00:32.450137 21922 main.go:141] libmachine: (multinode-858631) Calling .GetIP
I0224 01:00:32.452532 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.452856 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:32.452895 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.453048 21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
I0224 01:00:32.453653 21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
I0224 01:00:32.453804 21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
I0224 01:00:32.453885 21922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0224 01:00:32.453924 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:32.453963 21922 ssh_runner.go:195] Run: cat /version.json
I0224 01:00:32.453982 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:00:32.457509 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.457610 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.457892 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:32.457947 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:32.457971 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.457989 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:32.458118 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:00:32.458210 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:00:32.458343 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:32.458412 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:00:32.458495 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:00:32.458562 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:00:32.458629 21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
I0224 01:00:32.458686 21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
I0224 01:00:32.566472 21922 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0224 01:00:32.567132 21922 command_runner.go:130] > {"iso_version": "v1.29.0-1676568791-15849", "kicbase_version": "v0.0.37-1675980448-15752", "minikube_version": "v1.29.0", "commit": "cf7ad99382c4b89a2ffa286b1101797332265ce3"}
I0224 01:00:32.567237 21922 ssh_runner.go:195] Run: systemctl --version
I0224 01:00:32.572390 21922 command_runner.go:130] > systemd 247 (247)
I0224 01:00:32.572410 21922 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I0224 01:00:32.572692 21922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0224 01:00:32.577614 21922 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0224 01:00:32.577810 21922 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0224 01:00:32.577861 21922 ssh_runner.go:195] Run: which cri-dockerd
I0224 01:00:32.581074 21922 command_runner.go:130] > /usr/bin/cri-dockerd
I0224 01:00:32.581156 21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0224 01:00:32.589028 21922 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0224 01:00:32.604136 21922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0224 01:00:32.618439 21922 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0224 01:00:32.618464 21922 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0224 01:00:32.618473 21922 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0224 01:00:32.618552 21922 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0224 01:00:32.645414 21922 docker.go:630] Got preloaded images:
I0224 01:00:32.645434 21922 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
I0224 01:00:32.645487 21922 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0224 01:00:32.653967 21922 command_runner.go:139] > {"Repositories":{}}
I0224 01:00:32.654083 21922 ssh_runner.go:195] Run: which lz4
I0224 01:00:32.657537 21922 command_runner.go:130] > /usr/bin/lz4
I0224 01:00:32.657561 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0224 01:00:32.657623 21922 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0224 01:00:32.661229 21922 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0224 01:00:32.661457 21922 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0224 01:00:32.661489 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
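The scp above runs only because the preceding `stat` existence check exited with status 1. A sketch of that check-then-copy flow, with a temp path standing in for `/preloaded.tar.lz4`:

```shell
# mktemp -u yields a path that does not exist, like /preloaded.tar.lz4
# on a freshly created VM.
target=$(mktemp -u)
if ! stat -c "%s %y" "$target" >/dev/null 2>&1; then
  copied="copying preload tarball"   # minikube scps the ~416 MB tarball here
fi
echo "$copied"
```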
I0224 01:00:34.312007 21922 docker.go:594] Took 1.654399 seconds to copy over tarball
I0224 01:00:34.312064 21922 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0224 01:00:36.924173 21922 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.612079648s)
I0224 01:00:36.924227 21922 ssh_runner.go:146] rm: /preloaded.tar.lz4
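The extraction step above is `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4`, i.e. `tar` driving an external compressor. A runnable sketch of the same `-I` round trip, with gzip standing in for lz4 and temp dirs for `/var`:

```shell
# gzip stands in for lz4; the flow is otherwise the same as the log's
# `tar -I lz4 -C /var -xf /preloaded.tar.lz4`.
src=$(mktemp -d) ; dst=$(mktemp -d) ; ar=$(mktemp)
echo "layer data" > "$src/blob"
tar -I gzip -cf "$ar" -C "$src" blob   # pack via the compressor program
tar -I gzip -xf "$ar" -C "$dst"        # unpack the same way
cat "$dst/blob"   # prints "layer data"
```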
I0224 01:00:36.960913 21922 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0224 01:00:36.970095 21922 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.9.3":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.6-0":"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c":"sha256:fce326961ae2d51a5f726883fd59d
2a8c2ccc3e45d3bb859882db58e422e59e7"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.26.1":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.26.1":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.26.1":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed0
3c2c3b26b70fd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.26.1":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
I0224 01:00:36.970254 21922 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
I0224 01:00:36.987660 21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:00:37.096321 21922 ssh_runner.go:195] Run: sudo systemctl restart docker
I0224 01:00:40.441534 21922 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.345178102s)
I0224 01:00:40.441576 21922 start.go:485] detecting cgroup driver to use...
I0224 01:00:40.441724 21922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 01:00:40.458990 21922 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0224 01:00:40.459016 21922 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
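The `tee /etc/crictl.yaml` step above writes a two-line crictl config pointing at the containerd socket (the same file is rewritten later in the log to point at cri-dockerd). Sketched against a temp file:

```shell
# Temp path in place of /etc/crictl.yaml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
grep -c 'endpoint' "$cfg"   # prints 2
```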
I0224 01:00:40.459076 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0224 01:00:40.469201 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0224 01:00:40.479358 21922 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0224 01:00:40.479427 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0224 01:00:40.488595 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 01:00:40.497890 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0224 01:00:40.506619 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 01:00:40.515720 21922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0224 01:00:40.524910 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
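The `sed -i -r` commands above rewrite `/etc/containerd/config.toml` in place, using a captured group so TOML indentation survives the edit. One of them, run against a temp file:

```shell
# Temp file in place of /etc/containerd/config.toml; \1 keeps the
# original indentation when the value is rewritten.
toml=$(mktemp)
printf '    SystemdCgroup = true\n' > "$toml"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$toml"
cat "$toml"   # prints "    SystemdCgroup = false"
```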
I0224 01:00:40.533681 21922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0224 01:00:40.541806 21922 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0224 01:00:40.541876 21922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0224 01:00:40.549793 21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:00:40.652205 21922 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0224 01:00:40.668651 21922 start.go:485] detecting cgroup driver to use...
I0224 01:00:40.668764 21922 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0224 01:00:40.682477 21922 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0224 01:00:40.682492 21922 command_runner.go:130] > [Unit]
I0224 01:00:40.682498 21922 command_runner.go:130] > Description=Docker Application Container Engine
I0224 01:00:40.682510 21922 command_runner.go:130] > Documentation=https://docs.docker.com
I0224 01:00:40.682522 21922 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0224 01:00:40.682529 21922 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0224 01:00:40.682541 21922 command_runner.go:130] > StartLimitBurst=3
I0224 01:00:40.682549 21922 command_runner.go:130] > StartLimitIntervalSec=60
I0224 01:00:40.682557 21922 command_runner.go:130] > [Service]
I0224 01:00:40.682560 21922 command_runner.go:130] > Type=notify
I0224 01:00:40.682565 21922 command_runner.go:130] > Restart=on-failure
I0224 01:00:40.682572 21922 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0224 01:00:40.682587 21922 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0224 01:00:40.682596 21922 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0224 01:00:40.682602 21922 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0224 01:00:40.682610 21922 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0224 01:00:40.682621 21922 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0224 01:00:40.682633 21922 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0224 01:00:40.682654 21922 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0224 01:00:40.682667 21922 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0224 01:00:40.682674 21922 command_runner.go:130] > ExecStart=
I0224 01:00:40.682698 21922 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0224 01:00:40.682709 21922 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0224 01:00:40.682724 21922 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0224 01:00:40.682736 21922 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0224 01:00:40.682745 21922 command_runner.go:130] > LimitNOFILE=infinity
I0224 01:00:40.682750 21922 command_runner.go:130] > LimitNPROC=infinity
I0224 01:00:40.682754 21922 command_runner.go:130] > LimitCORE=infinity
I0224 01:00:40.682762 21922 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0224 01:00:40.682771 21922 command_runner.go:130] > # Only systemd 226 and above support this option.
I0224 01:00:40.682778 21922 command_runner.go:130] > TasksMax=infinity
I0224 01:00:40.682782 21922 command_runner.go:130] > TimeoutStartSec=0
I0224 01:00:40.682791 21922 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0224 01:00:40.682796 21922 command_runner.go:130] > Delegate=yes
I0224 01:00:40.682802 21922 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0224 01:00:40.682808 21922 command_runner.go:130] > KillMode=process
I0224 01:00:40.682815 21922 command_runner.go:130] > [Install]
I0224 01:00:40.682828 21922 command_runner.go:130] > WantedBy=multi-user.target
I0224 01:00:40.682877 21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0224 01:00:40.696103 21922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0224 01:00:40.713056 21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0224 01:00:40.725487 21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 01:00:40.737288 21922 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0224 01:00:40.765853 21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 01:00:40.778107 21922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 01:00:40.794238 21922 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0224 01:00:40.794265 21922 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0224 01:00:40.794554 21922 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0224 01:00:40.894905 21922 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0224 01:00:40.996621 21922 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0224 01:00:40.996653 21922 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0224 01:00:41.012805 21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:00:41.111187 21922 ssh_runner.go:195] Run: sudo systemctl restart docker
I0224 01:00:42.458510 21922 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.347288974s)
I0224 01:00:42.458577 21922 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0224 01:00:42.560401 21922 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0224 01:00:42.658143 21922 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0224 01:00:42.765584 21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:00:42.868302 21922 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0224 01:00:42.885199 21922 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0224 01:00:42.885268 21922 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0224 01:00:42.891095 21922 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0224 01:00:42.891112 21922 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0224 01:00:42.891118 21922 command_runner.go:130] > Device: 16h/22d Inode: 898 Links: 1
I0224 01:00:42.891127 21922 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0224 01:00:42.891135 21922 command_runner.go:130] > Access: 2023-02-24 01:00:42.874787956 +0000
I0224 01:00:42.891143 21922 command_runner.go:130] > Modify: 2023-02-24 01:00:42.874787956 +0000
I0224 01:00:42.891158 21922 command_runner.go:130] > Change: 2023-02-24 01:00:42.877789446 +0000
I0224 01:00:42.891164 21922 command_runner.go:130] > Birth: -
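The `stat /var/run/cri-dockerd.sock` above is the first probe of the declared 60s wait for the socket. A sketch of such a poll loop, with a temp file standing in for the socket (it already exists here, so the loop falls through on the first stat):

```shell
# A temp file stands in for /var/run/cri-dockerd.sock; the tries counter
# bounds the wait the way minikube's 60s deadline does.
sock=$(mktemp)
tries=0
until stat "$sock" >/dev/null 2>&1; do
  tries=$((tries + 1))
  if [ "$tries" -ge 60 ]; then echo "timed out"; exit 1; fi
  sleep 1
done
echo "socket ready"
```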
I0224 01:00:42.891189 21922 start.go:553] Will wait 60s for crictl version
I0224 01:00:42.891244 21922 ssh_runner.go:195] Run: which crictl
I0224 01:00:42.894967 21922 command_runner.go:130] > /usr/bin/crictl
I0224 01:00:42.895105 21922 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0224 01:00:42.999570 21922 command_runner.go:130] > Version: 0.1.0
I0224 01:00:42.999599 21922 command_runner.go:130] > RuntimeName: docker
I0224 01:00:42.999608 21922 command_runner.go:130] > RuntimeVersion: 20.10.23
I0224 01:00:42.999617 21922 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0224 01:00:42.999648 21922 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0224 01:00:42.999707 21922 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0224 01:00:43.031033 21922 command_runner.go:130] > 20.10.23
I0224 01:00:43.031201 21922 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0224 01:00:43.062577 21922 command_runner.go:130] > 20.10.23
I0224 01:00:43.188270 21922 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
I0224 01:00:43.188370 21922 main.go:141] libmachine: (multinode-858631) Calling .GetIP
I0224 01:00:43.190950 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:43.191349 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:00:43.191386 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:00:43.191576 21922 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0224 01:00:43.196168 21922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0224 01:00:43.208448 21922 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0224 01:00:43.208505 21922 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0224 01:00:43.235549 21922 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0224 01:00:43.235573 21922 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0224 01:00:43.235581 21922 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0224 01:00:43.235589 21922 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0224 01:00:43.235597 21922 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0224 01:00:43.235604 21922 command_runner.go:130] > registry.k8s.io/pause:3.9
I0224 01:00:43.235612 21922 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0224 01:00:43.235620 21922 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0224 01:00:43.236734 21922 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0224 01:00:43.236755 21922 docker.go:560] Images already preloaded, skipping extraction
I0224 01:00:43.236808 21922 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0224 01:00:43.259789 21922 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0224 01:00:43.259813 21922 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0224 01:00:43.259819 21922 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0224 01:00:43.259825 21922 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0224 01:00:43.259829 21922 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0224 01:00:43.259837 21922 command_runner.go:130] > registry.k8s.io/pause:3.9
I0224 01:00:43.259842 21922 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0224 01:00:43.259854 21922 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0224 01:00:43.260930 21922 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0224 01:00:43.260945 21922 cache_images.go:84] Images are preloaded, skipping loading
I0224 01:00:43.260996 21922 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0224 01:00:43.291394 21922 command_runner.go:130] > cgroupfs
I0224 01:00:43.292593 21922 cni.go:84] Creating CNI manager for ""
I0224 01:00:43.292611 21922 cni.go:136] 1 nodes found, recommending kindnet
I0224 01:00:43.292629 21922 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0224 01:00:43.292649 21922 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-858631 NodeName:multinode-858631 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0224 01:00:43.292803 21922 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.217
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-858631"
kubeletExtraArgs:
node-ip: 192.168.39.217
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0224 01:00:43.292905 21922 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-858631 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0224 01:00:43.292960 21922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0224 01:00:43.302486 21922 command_runner.go:130] > kubeadm
I0224 01:00:43.302511 21922 command_runner.go:130] > kubectl
I0224 01:00:43.302515 21922 command_runner.go:130] > kubelet
I0224 01:00:43.302531 21922 binaries.go:44] Found k8s binaries, skipping transfer
I0224 01:00:43.302568 21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0224 01:00:43.311742 21922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
I0224 01:00:43.327962 21922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0224 01:00:43.343617 21922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
I0224 01:00:43.359576 21922 ssh_runner.go:195] Run: grep 192.168.39.217 control-plane.minikube.internal$ /etc/hosts
I0224 01:00:43.363235 21922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0224 01:00:43.374564 21922 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631 for IP: 192.168.39.217
I0224 01:00:43.374588 21922 certs.go:186] acquiring lock for shared ca certs: {Name:mk0c9037d1d3974a6bc5ba375ef4804966dba284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 01:00:43.374731 21922 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key
I0224 01:00:43.374772 21922 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key
I0224 01:00:43.374825 21922 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key
I0224 01:00:43.374838 21922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt with IP's: []
I0224 01:00:43.757434 21922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt ...
I0224 01:00:43.757461 21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt: {Name:mkcc0c569c9788541aab5f3223cd2b7951674618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 01:00:43.757640 21922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key ...
I0224 01:00:43.757650 21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key: {Name:mk0ed358ba22663cd96c2d3cd2869c3a20fbda2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 01:00:43.757728 21922 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key.891f873f
I0224 01:00:43.757741 21922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt.891f873f with IP's: [192.168.39.217 10.96.0.1 127.0.0.1 10.0.0.1]
I0224 01:00:43.973440 21922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt.891f873f ...
I0224 01:00:43.973474 21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt.891f873f: {Name:mk00cd72cfae969b641b12281c1312aa0fbdbefe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 01:00:43.973627 21922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key.891f873f ...
I0224 01:00:43.973637 21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key.891f873f: {Name:mkbc17d769c4cddfc5578c3ea30c376f66ff2a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 01:00:43.973703 21922 certs.go:333] copying /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt.891f873f -> /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt
I0224 01:00:43.973777 21922 certs.go:337] copying /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key.891f873f -> /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key
I0224 01:00:43.973824 21922 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.key
I0224 01:00:43.973834 21922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.crt with IP's: []
I0224 01:00:44.094334 21922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.crt ...
I0224 01:00:44.094360 21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.crt: {Name:mk8988f1556cd909013dfd0d62c0a8c3e8199ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 01:00:44.094504 21922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.key ...
I0224 01:00:44.094514 21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.key: {Name:mk1be02ef2a6514ffd86117be55d5b107c276723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 01:00:44.094579 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0224 01:00:44.094594 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0224 01:00:44.094610 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0224 01:00:44.094622 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0224 01:00:44.094635 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0224 01:00:44.094647 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0224 01:00:44.094658 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0224 01:00:44.094671 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0224 01:00:44.094734 21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem (1338 bytes)
W0224 01:00:44.094769 21922 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131_empty.pem, impossibly tiny 0 bytes
I0224 01:00:44.094778 21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem (1679 bytes)
I0224 01:00:44.094802 21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem (1078 bytes)
I0224 01:00:44.094825 21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem (1123 bytes)
I0224 01:00:44.094846 21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem (1679 bytes)
I0224 01:00:44.094884 21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem (1708 bytes)
I0224 01:00:44.094908 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem -> /usr/share/ca-certificates/11131.pem
I0224 01:00:44.094921 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> /usr/share/ca-certificates/111312.pem
I0224 01:00:44.094933 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0224 01:00:44.095423 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0224 01:00:44.119456 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0224 01:00:44.141438 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0224 01:00:44.163472 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0224 01:00:44.184474 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0224 01:00:44.205536 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0224 01:00:44.226889 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0224 01:00:44.248308 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0224 01:00:44.269339 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem --> /usr/share/ca-certificates/11131.pem (1338 bytes)
I0224 01:00:44.290313 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /usr/share/ca-certificates/111312.pem (1708 bytes)
I0224 01:00:44.311391 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0224 01:00:44.333369 21922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0224 01:00:44.349242 21922 ssh_runner.go:195] Run: openssl version
I0224 01:00:44.354342 21922 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0224 01:00:44.354617 21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11131.pem && ln -fs /usr/share/ca-certificates/11131.pem /etc/ssl/certs/11131.pem"
I0224 01:00:44.364312 21922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11131.pem
I0224 01:00:44.368722 21922 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/11131.pem
I0224 01:00:44.368859 21922 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/11131.pem
I0224 01:00:44.368908 21922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11131.pem
I0224 01:00:44.374353 21922 command_runner.go:130] > 51391683
I0224 01:00:44.374420 21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11131.pem /etc/ssl/certs/51391683.0"
I0224 01:00:44.384479 21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111312.pem && ln -fs /usr/share/ca-certificates/111312.pem /etc/ssl/certs/111312.pem"
I0224 01:00:44.394484 21922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111312.pem
I0224 01:00:44.398697 21922 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/111312.pem
I0224 01:00:44.398904 21922 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/111312.pem
I0224 01:00:44.398946 21922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111312.pem
I0224 01:00:44.404377 21922 command_runner.go:130] > 3ec20f2e
I0224 01:00:44.404422 21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111312.pem /etc/ssl/certs/3ec20f2e.0"
I0224 01:00:44.414437 21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0224 01:00:44.424443 21922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0224 01:00:44.428582 21922 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
I0224 01:00:44.428754 21922 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
I0224 01:00:44.428790 21922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0224 01:00:44.434050 21922 command_runner.go:130] > b5213941
I0224 01:00:44.434108 21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0224 01:00:44.444029 21922 kubeadm.go:401] StartCluster: {Name:multinode-858631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0224 01:00:44.444176 21922 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0224 01:00:44.468715 21922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0224 01:00:44.477644 21922 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
I0224 01:00:44.477667 21922 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
I0224 01:00:44.477674 21922 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
I0224 01:00:44.477727 21922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0224 01:00:44.486752 21922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0224 01:00:44.495237 21922 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0224 01:00:44.495257 21922 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0224 01:00:44.495271 21922 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0224 01:00:44.495282 21922 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0224 01:00:44.495448 21922 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0224 01:00:44.495475 21922 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0224 01:00:44.590193 21922 kubeadm.go:322] W0224 01:00:44.584870 1313 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0224 01:00:44.590215 21922 command_runner.go:130] ! W0224 01:00:44.584870 1313 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0224 01:00:44.841325 21922 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0224 01:00:44.841349 21922 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0224 01:00:59.743641 21922 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
I0224 01:00:59.743665 21922 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
I0224 01:00:59.743714 21922 kubeadm.go:322] [preflight] Running pre-flight checks
I0224 01:00:59.743745 21922 command_runner.go:130] > [preflight] Running pre-flight checks
I0224 01:00:59.743871 21922 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0224 01:00:59.743885 21922 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
I0224 01:00:59.744009 21922 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0224 01:00:59.744023 21922 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
I0224 01:00:59.744130 21922 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0224 01:00:59.744137 21922 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0224 01:00:59.744189 21922 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0224 01:00:59.746047 21922 out.go:204] - Generating certificates and keys ...
I0224 01:00:59.744264 21922 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0224 01:00:59.746123 21922 kubeadm.go:322] [certs] Using existing ca certificate authority
I0224 01:00:59.746138 21922 command_runner.go:130] > [certs] Using existing ca certificate authority
I0224 01:00:59.746201 21922 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0224 01:00:59.746212 21922 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I0224 01:00:59.746316 21922 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
I0224 01:00:59.746327 21922 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0224 01:00:59.746412 21922 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
I0224 01:00:59.746420 21922 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0224 01:00:59.746504 21922 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
I0224 01:00:59.746514 21922 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0224 01:00:59.746582 21922 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
I0224 01:00:59.746595 21922 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0224 01:00:59.746661 21922 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
I0224 01:00:59.746667 21922 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0224 01:00:59.746803 21922 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-858631] and IPs [192.168.39.217 127.0.0.1 ::1]
I0224 01:00:59.746813 21922 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-858631] and IPs [192.168.39.217 127.0.0.1 ::1]
I0224 01:00:59.746884 21922 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
I0224 01:00:59.746888 21922 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0224 01:00:59.747037 21922 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-858631] and IPs [192.168.39.217 127.0.0.1 ::1]
I0224 01:00:59.747049 21922 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-858631] and IPs [192.168.39.217 127.0.0.1 ::1]
I0224 01:00:59.747109 21922 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
I0224 01:00:59.747115 21922 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0224 01:00:59.747186 21922 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
I0224 01:00:59.747197 21922 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0224 01:00:59.747264 21922 command_runner.go:130] > [certs] Generating "sa" key and public key
I0224 01:00:59.747273 21922 kubeadm.go:322] [certs] Generating "sa" key and public key
I0224 01:00:59.747342 21922 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0224 01:00:59.747350 21922 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0224 01:00:59.747415 21922 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0224 01:00:59.747422 21922 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0224 01:00:59.747474 21922 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0224 01:00:59.747480 21922 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0224 01:00:59.747552 21922 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0224 01:00:59.747559 21922 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0224 01:00:59.747620 21922 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0224 01:00:59.747627 21922 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0224 01:00:59.747749 21922 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0224 01:00:59.747758 21922 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0224 01:00:59.747870 21922 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0224 01:00:59.747879 21922 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0224 01:00:59.747924 21922 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0224 01:00:59.747931 21922 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0224 01:00:59.748009 21922 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0224 01:00:59.748018 21922 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0224 01:00:59.750647 21922 out.go:204] - Booting up control plane ...
I0224 01:00:59.750742 21922 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0224 01:00:59.750755 21922 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0224 01:00:59.750830 21922 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0224 01:00:59.750845 21922 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0224 01:00:59.750912 21922 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0224 01:00:59.750924 21922 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0224 01:00:59.751017 21922 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0224 01:00:59.751029 21922 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0224 01:00:59.751189 21922 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0224 01:00:59.751197 21922 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0224 01:00:59.751292 21922 command_runner.go:130] > [apiclient] All control plane components are healthy after 10.503870 seconds
I0224 01:00:59.751300 21922 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503870 seconds
I0224 01:00:59.751411 21922 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0224 01:00:59.751419 21922 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0224 01:00:59.751574 21922 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0224 01:00:59.751583 21922 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0224 01:00:59.751629 21922 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
I0224 01:00:59.751634 21922 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0224 01:00:59.751784 21922 command_runner.go:130] > [mark-control-plane] Marking the node multinode-858631 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0224 01:00:59.751790 21922 kubeadm.go:322] [mark-control-plane] Marking the node multinode-858631 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0224 01:00:59.751839 21922 command_runner.go:130] > [bootstrap-token] Using token: wc0vru.55w1txftddrsz4y0
I0224 01:00:59.751845 21922 kubeadm.go:322] [bootstrap-token] Using token: wc0vru.55w1txftddrsz4y0
I0224 01:00:59.753292 21922 out.go:204] - Configuring RBAC rules ...
I0224 01:00:59.753399 21922 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0224 01:00:59.753412 21922 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0224 01:00:59.753524 21922 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0224 01:00:59.753532 21922 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0224 01:00:59.753647 21922 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0224 01:00:59.753655 21922 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0224 01:00:59.753754 21922 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0224 01:00:59.753758 21922 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0224 01:00:59.753852 21922 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0224 01:00:59.753855 21922 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0224 01:00:59.753922 21922 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0224 01:00:59.753928 21922 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0224 01:00:59.754036 21922 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0224 01:00:59.754043 21922 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0224 01:00:59.754077 21922 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I0224 01:00:59.754082 21922 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0224 01:00:59.754117 21922 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I0224 01:00:59.754122 21922 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0224 01:00:59.754126 21922 kubeadm.go:322]
I0224 01:00:59.754174 21922 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
I0224 01:00:59.754180 21922 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0224 01:00:59.754183 21922 kubeadm.go:322]
I0224 01:00:59.754243 21922 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
I0224 01:00:59.754249 21922 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0224 01:00:59.754252 21922 kubeadm.go:322]
I0224 01:00:59.754272 21922 command_runner.go:130] > mkdir -p $HOME/.kube
I0224 01:00:59.754278 21922 kubeadm.go:322] mkdir -p $HOME/.kube
I0224 01:00:59.754358 21922 command_runner.go:130] > sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0224 01:00:59.754371 21922 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0224 01:00:59.754441 21922 command_runner.go:130] > sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0224 01:00:59.754450 21922 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0224 01:00:59.754456 21922 kubeadm.go:322]
I0224 01:00:59.754528 21922 command_runner.go:130] > Alternatively, if you are the root user, you can run:
I0224 01:00:59.754536 21922 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0224 01:00:59.754541 21922 kubeadm.go:322]
I0224 01:00:59.754620 21922 command_runner.go:130] > export KUBECONFIG=/etc/kubernetes/admin.conf
I0224 01:00:59.754629 21922 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0224 01:00:59.754634 21922 kubeadm.go:322]
I0224 01:00:59.754702 21922 command_runner.go:130] > You should now deploy a pod network to the cluster.
I0224 01:00:59.754711 21922 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0224 01:00:59.754819 21922 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0224 01:00:59.754838 21922 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0224 01:00:59.754927 21922 command_runner.go:130] > https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0224 01:00:59.754936 21922 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0224 01:00:59.754947 21922 kubeadm.go:322]
I0224 01:00:59.755050 21922 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
I0224 01:00:59.755057 21922 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0224 01:00:59.755161 21922 command_runner.go:130] > and service account keys on each node and then running the following as root:
I0224 01:00:59.755173 21922 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0224 01:00:59.755179 21922 kubeadm.go:322]
I0224 01:00:59.755275 21922 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token wc0vru.55w1txftddrsz4y0 \
I0224 01:00:59.755286 21922 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wc0vru.55w1txftddrsz4y0 \
I0224 01:00:59.755398 21922 command_runner.go:130] > --discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1 \
I0224 01:00:59.755408 21922 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1 \
I0224 01:00:59.755431 21922 command_runner.go:130] > --control-plane
I0224 01:00:59.755444 21922 kubeadm.go:322] --control-plane
I0224 01:00:59.755456 21922 kubeadm.go:322]
I0224 01:00:59.755555 21922 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
I0224 01:00:59.755563 21922 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0224 01:00:59.755568 21922 kubeadm.go:322]
I0224 01:00:59.755668 21922 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token wc0vru.55w1txftddrsz4y0 \
I0224 01:00:59.755676 21922 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wc0vru.55w1txftddrsz4y0 \
I0224 01:00:59.755788 21922 command_runner.go:130] > --discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1
I0224 01:00:59.755805 21922 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1
I0224 01:00:59.755816 21922 cni.go:84] Creating CNI manager for ""
I0224 01:00:59.755830 21922 cni.go:136] 1 nodes found, recommending kindnet
I0224 01:00:59.757492 21922 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0224 01:00:59.758857 21922 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0224 01:00:59.769348 21922 command_runner.go:130] > File: /opt/cni/bin/portmap
I0224 01:00:59.769367 21922 command_runner.go:130] > Size: 2798344 Blocks: 5472 IO Block: 4096 regular file
I0224 01:00:59.769376 21922 command_runner.go:130] > Device: 11h/17d Inode: 3542 Links: 1
I0224 01:00:59.769386 21922 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0224 01:00:59.769395 21922 command_runner.go:130] > Access: 2023-02-24 01:00:20.396182736 +0000
I0224 01:00:59.769404 21922 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
I0224 01:00:59.769413 21922 command_runner.go:130] > Change: 2023-02-24 01:00:18.603182736 +0000
I0224 01:00:59.769423 21922 command_runner.go:130] > Birth: -
I0224 01:00:59.769568 21922 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
I0224 01:00:59.769585 21922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0224 01:00:59.810357 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0224 01:01:00.838101 21922 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
I0224 01:01:00.846024 21922 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
I0224 01:01:00.854961 21922 command_runner.go:130] > serviceaccount/kindnet created
I0224 01:01:00.872715 21922 command_runner.go:130] > daemonset.apps/kindnet created
I0224 01:01:00.876230 21922 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.065843677s)
I0224 01:01:00.876272 21922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0224 01:01:00.876358 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:00.876406 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510 minikube.k8s.io/name=multinode-858631 minikube.k8s.io/updated_at=2023_02_24T01_01_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:00.902234 21922 command_runner.go:130] > -16
I0224 01:01:00.902361 21922 ops.go:34] apiserver oom_adj: -16
I0224 01:01:01.028438 21922 command_runner.go:130] > node/multinode-858631 labeled
I0224 01:01:01.028488 21922 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
I0224 01:01:01.028594 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:01.108266 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:01.609302 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:01.699333 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:02.108806 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:02.198922 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:02.609703 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:02.697066 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:03.109297 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:03.216079 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:03.609289 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:03.700025 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:04.109222 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:04.209162 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:04.609636 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:04.697004 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:05.109567 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:05.225327 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:05.609539 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:05.698814 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:06.109440 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:06.206194 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:06.609075 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:06.696484 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:07.109227 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:07.204309 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:07.608808 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:07.689929 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:08.109675 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:08.217755 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:08.609397 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:08.684146 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:09.109564 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:09.286262 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:09.609698 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:09.710691 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:10.108903 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:10.216001 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:10.608840 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:10.708540 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:11.109253 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:11.253967 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:11.609554 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:11.719570 21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0224 01:01:12.109086 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0224 01:01:12.213727 21922 command_runner.go:130] > NAME SECRETS AGE
I0224 01:01:12.213753 21922 command_runner.go:130] > default 0 1s
I0224 01:01:12.215256 21922 kubeadm.go:1073] duration metric: took 11.338954657s to wait for elevateKubeSystemPrivileges.
I0224 01:01:12.215285 21922 kubeadm.go:403] StartCluster complete in 27.771261829s
I0224 01:01:12.215305 21922 settings.go:142] acquiring lock: {Name:mk174257a2297336a9e6f80080faa7ef819759a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 01:01:12.215390 21922 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15909-4074/kubeconfig
I0224 01:01:12.216091 21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/kubeconfig: {Name:mk7a14c2c6ccf91ba70e9a5ad74574ac5676cf63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 01:01:12.216320 21922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0224 01:01:12.216450 21922 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0224 01:01:12.216511 21922 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0224 01:01:12.216544 21922 addons.go:65] Setting default-storageclass=true in profile "multinode-858631"
I0224 01:01:12.216559 21922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-858631"
I0224 01:01:12.216536 21922 addons.go:65] Setting storage-provisioner=true in profile "multinode-858631"
I0224 01:01:12.216602 21922 addons.go:227] Setting addon storage-provisioner=true in "multinode-858631"
I0224 01:01:12.216659 21922 host.go:66] Checking if "multinode-858631" exists ...
I0224 01:01:12.216665 21922 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15909-4074/kubeconfig
I0224 01:01:12.216956 21922 kapi.go:59] client config for multinode-858631: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0224 01:01:12.217047 21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0224 01:01:12.217053 21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0224 01:01:12.217076 21922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 01:01:12.217078 21922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 01:01:12.217813 21922 cert_rotation.go:137] Starting client certificate rotation controller
I0224 01:01:12.217937 21922 round_trippers.go:463] GET https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0224 01:01:12.217951 21922 round_trippers.go:469] Request Headers:
I0224 01:01:12.217959 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:12.217966 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:12.232172 21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33789
I0224 01:01:12.232489 21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44795
I0224 01:01:12.232557 21922 main.go:141] libmachine: () Calling .GetVersion
I0224 01:01:12.232849 21922 main.go:141] libmachine: () Calling .GetVersion
I0224 01:01:12.233027 21922 main.go:141] libmachine: Using API Version 1
I0224 01:01:12.233048 21922 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 01:01:12.233285 21922 main.go:141] libmachine: Using API Version 1
I0224 01:01:12.233310 21922 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 01:01:12.233391 21922 main.go:141] libmachine: () Calling .GetMachineName
I0224 01:01:12.233611 21922 main.go:141] libmachine: (multinode-858631) Calling .GetState
I0224 01:01:12.233617 21922 main.go:141] libmachine: () Calling .GetMachineName
I0224 01:01:12.234165 21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0224 01:01:12.234213 21922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 01:01:12.235121 21922 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
I0224 01:01:12.235141 21922 round_trippers.go:577] Response Headers:
I0224 01:01:12.235151 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:12.235160 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:12.235169 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:12.235178 21922 round_trippers.go:580] Content-Length: 291
I0224 01:01:12.235185 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:12 GMT
I0224 01:01:12.235192 21922 round_trippers.go:580] Audit-Id: 6f7763fc-fcae-4207-bdd2-f51554563a10
I0224 01:01:12.235200 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:12.235227 21922 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1feec0bc-8f6f-4ed8-8e86-04a25e711058","resourceVersion":"352","creationTimestamp":"2023-02-24T01:00:59Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0224 01:01:12.235598 21922 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1feec0bc-8f6f-4ed8-8e86-04a25e711058","resourceVersion":"352","creationTimestamp":"2023-02-24T01:00:59Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0224 01:01:12.235635 21922 round_trippers.go:463] PUT https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0224 01:01:12.235638 21922 round_trippers.go:469] Request Headers:
I0224 01:01:12.235645 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:12.235651 21922 round_trippers.go:473] Content-Type: application/json
I0224 01:01:12.235657 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:12.235766 21922 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15909-4074/kubeconfig
I0224 01:01:12.236092 21922 kapi.go:59] client config for multinode-858631: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0224 01:01:12.236462 21922 round_trippers.go:463] GET https://192.168.39.217:8443/apis/storage.k8s.io/v1/storageclasses
I0224 01:01:12.236478 21922 round_trippers.go:469] Request Headers:
I0224 01:01:12.236490 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:12.236500 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:12.240657 21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0224 01:01:12.240677 21922 round_trippers.go:577] Response Headers:
I0224 01:01:12.240687 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:12.240697 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:12.240709 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:12.240720 21922 round_trippers.go:580] Content-Length: 109
I0224 01:01:12.240732 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:12 GMT
I0224 01:01:12.240744 21922 round_trippers.go:580] Audit-Id: 86b2bafb-230d-4596-8c45-d078c1ca8038
I0224 01:01:12.240757 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:12.240776 21922 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"352"},"items":[]}
I0224 01:01:12.241022 21922 addons.go:227] Setting addon default-storageclass=true in "multinode-858631"
I0224 01:01:12.241054 21922 host.go:66] Checking if "multinode-858631" exists ...
I0224 01:01:12.241394 21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0224 01:01:12.241435 21922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 01:01:12.248639 21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37785
I0224 01:01:12.249093 21922 main.go:141] libmachine: () Calling .GetVersion
I0224 01:01:12.249620 21922 main.go:141] libmachine: Using API Version 1
I0224 01:01:12.249638 21922 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 01:01:12.249930 21922 main.go:141] libmachine: () Calling .GetMachineName
I0224 01:01:12.250122 21922 main.go:141] libmachine: (multinode-858631) Calling .GetState
I0224 01:01:12.251892 21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
I0224 01:01:12.254052 21922 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0224 01:01:12.252824 21922 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
I0224 01:01:12.255572 21922 round_trippers.go:577] Response Headers:
I0224 01:01:12.255586 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:12.255600 21922 round_trippers.go:580] Content-Length: 291
I0224 01:01:12.255610 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:12 GMT
I0224 01:01:12.255622 21922 round_trippers.go:580] Audit-Id: b1655ca1-a32d-41b1-aa84-2e1341e00c48
I0224 01:01:12.255633 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:12.255644 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:12.255654 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:12.255684 21922 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1feec0bc-8f6f-4ed8-8e86-04a25e711058","resourceVersion":"353","creationTimestamp":"2023-02-24T01:00:59Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0224 01:01:12.255699 21922 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0224 01:01:12.255718 21922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0224 01:01:12.255741 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:01:12.256724 21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
I0224 01:01:12.257065 21922 main.go:141] libmachine: () Calling .GetVersion
I0224 01:01:12.257563 21922 main.go:141] libmachine: Using API Version 1
I0224 01:01:12.257590 21922 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 01:01:12.258021 21922 main.go:141] libmachine: () Calling .GetMachineName
I0224 01:01:12.258536 21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0224 01:01:12.258577 21922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 01:01:12.259144 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:01:12.259618 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:01:12.259646 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:01:12.259793 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:01:12.259963 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:01:12.260106 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:01:12.260222 21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
I0224 01:01:12.272835 21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46481
I0224 01:01:12.273230 21922 main.go:141] libmachine: () Calling .GetVersion
I0224 01:01:12.273705 21922 main.go:141] libmachine: Using API Version 1
I0224 01:01:12.273729 21922 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 01:01:12.274061 21922 main.go:141] libmachine: () Calling .GetMachineName
I0224 01:01:12.274270 21922 main.go:141] libmachine: (multinode-858631) Calling .GetState
I0224 01:01:12.275817 21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
I0224 01:01:12.276041 21922 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0224 01:01:12.276055 21922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0224 01:01:12.276067 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:01:12.278828 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:01:12.279229 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:01:12.279256 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:01:12.279526 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:01:12.279695 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:01:12.279847 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:01:12.279958 21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
I0224 01:01:12.493779 21922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0224 01:01:12.609491 21922 command_runner.go:130] > apiVersion: v1
I0224 01:01:12.609510 21922 command_runner.go:130] > data:
I0224 01:01:12.609514 21922 command_runner.go:130] > Corefile: |
I0224 01:01:12.609518 21922 command_runner.go:130] > .:53 {
I0224 01:01:12.609522 21922 command_runner.go:130] > errors
I0224 01:01:12.609526 21922 command_runner.go:130] > health {
I0224 01:01:12.609531 21922 command_runner.go:130] > lameduck 5s
I0224 01:01:12.609534 21922 command_runner.go:130] > }
I0224 01:01:12.609538 21922 command_runner.go:130] > ready
I0224 01:01:12.609544 21922 command_runner.go:130] > kubernetes cluster.local in-addr.arpa ip6.arpa {
I0224 01:01:12.609548 21922 command_runner.go:130] > pods insecure
I0224 01:01:12.609553 21922 command_runner.go:130] > fallthrough in-addr.arpa ip6.arpa
I0224 01:01:12.609564 21922 command_runner.go:130] > ttl 30
I0224 01:01:12.609568 21922 command_runner.go:130] > }
I0224 01:01:12.609576 21922 command_runner.go:130] > prometheus :9153
I0224 01:01:12.609581 21922 command_runner.go:130] > forward . /etc/resolv.conf {
I0224 01:01:12.609588 21922 command_runner.go:130] > max_concurrent 1000
I0224 01:01:12.609591 21922 command_runner.go:130] > }
I0224 01:01:12.609596 21922 command_runner.go:130] > cache 30
I0224 01:01:12.609603 21922 command_runner.go:130] > loop
I0224 01:01:12.609606 21922 command_runner.go:130] > reload
I0224 01:01:12.609610 21922 command_runner.go:130] > loadbalance
I0224 01:01:12.609614 21922 command_runner.go:130] > }
I0224 01:01:12.609618 21922 command_runner.go:130] > kind: ConfigMap
I0224 01:01:12.609622 21922 command_runner.go:130] > metadata:
I0224 01:01:12.609630 21922 command_runner.go:130] > creationTimestamp: "2023-02-24T01:00:59Z"
I0224 01:01:12.609636 21922 command_runner.go:130] > name: coredns
I0224 01:01:12.609641 21922 command_runner.go:130] > namespace: kube-system
I0224 01:01:12.609646 21922 command_runner.go:130] > resourceVersion: "237"
I0224 01:01:12.609651 21922 command_runner.go:130] > uid: 9d4033e4-3349-4156-a8a9-b90674355b37
I0224 01:01:12.611617 21922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0224 01:01:12.636652 21922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0224 01:01:12.756223 21922 round_trippers.go:463] GET https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0224 01:01:12.756241 21922 round_trippers.go:469] Request Headers:
I0224 01:01:12.756249 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:12.756255 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:12.759162 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:12.759177 21922 round_trippers.go:577] Response Headers:
I0224 01:01:12.759185 21922 round_trippers.go:580] Audit-Id: 47bc6633-bcde-441e-aa15-cff68c986372
I0224 01:01:12.759192 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:12.759200 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:12.759208 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:12.759220 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:12.759230 21922 round_trippers.go:580] Content-Length: 291
I0224 01:01:12.759241 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:12 GMT
I0224 01:01:12.759262 21922 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1feec0bc-8f6f-4ed8-8e86-04a25e711058","resourceVersion":"363","creationTimestamp":"2023-02-24T01:00:59Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0224 01:01:12.759400 21922 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-858631" context rescaled to 1 replicas
I0224 01:01:12.759428 21922 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0224 01:01:12.761111 21922 out.go:177] * Verifying Kubernetes components...
I0224 01:01:12.763028 21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0224 01:01:13.433404 21922 command_runner.go:130] > storageclass.storage.k8s.io/standard created
I0224 01:01:13.435510 21922 main.go:141] libmachine: Making call to close driver server
I0224 01:01:13.435529 21922 main.go:141] libmachine: (multinode-858631) Calling .Close
I0224 01:01:13.435849 21922 main.go:141] libmachine: Successfully made call to close driver server
I0224 01:01:13.435865 21922 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 01:01:13.435875 21922 main.go:141] libmachine: Making call to close driver server
I0224 01:01:13.435883 21922 main.go:141] libmachine: (multinode-858631) Calling .Close
I0224 01:01:13.436083 21922 main.go:141] libmachine: Successfully made call to close driver server
I0224 01:01:13.436108 21922 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 01:01:13.436121 21922 main.go:141] libmachine: Making call to close driver server
I0224 01:01:13.436136 21922 main.go:141] libmachine: (multinode-858631) Calling .Close
I0224 01:01:13.436151 21922 main.go:141] libmachine: (multinode-858631) DBG | Closing plugin on server side
I0224 01:01:13.436358 21922 main.go:141] libmachine: Successfully made call to close driver server
I0224 01:01:13.436373 21922 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 01:01:13.436359 21922 main.go:141] libmachine: (multinode-858631) DBG | Closing plugin on server side
I0224 01:01:13.543744 21922 command_runner.go:130] > serviceaccount/storage-provisioner created
I0224 01:01:13.543773 21922 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
I0224 01:01:13.543783 21922 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0224 01:01:13.543799 21922 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0224 01:01:13.543808 21922 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
I0224 01:01:13.543817 21922 command_runner.go:130] > pod/storage-provisioner created
I0224 01:01:13.543853 21922 main.go:141] libmachine: Making call to close driver server
I0224 01:01:13.543870 21922 main.go:141] libmachine: (multinode-858631) Calling .Close
I0224 01:01:13.544150 21922 main.go:141] libmachine: Successfully made call to close driver server
I0224 01:01:13.544166 21922 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 01:01:13.544176 21922 main.go:141] libmachine: Making call to close driver server
I0224 01:01:13.544176 21922 main.go:141] libmachine: (multinode-858631) DBG | Closing plugin on server side
I0224 01:01:13.544184 21922 main.go:141] libmachine: (multinode-858631) Calling .Close
I0224 01:01:13.544532 21922 main.go:141] libmachine: Successfully made call to close driver server
I0224 01:01:13.544545 21922 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 01:01:13.546148 21922 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0224 01:01:13.547278 21922 addons.go:492] enable addons completed in 1.330830289s: enabled=[default-storageclass storage-provisioner]
I0224 01:01:13.577322 21922 command_runner.go:130] > configmap/coredns replaced
I0224 01:01:13.580124 21922 start.go:921] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0224 01:01:13.580426 21922 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15909-4074/kubeconfig
I0224 01:01:13.580620 21922 kapi.go:59] client config for multinode-858631: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0224 01:01:13.580843 21922 node_ready.go:35] waiting up to 6m0s for node "multinode-858631" to be "Ready" ...
I0224 01:01:13.580893 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:13.580900 21922 round_trippers.go:469] Request Headers:
I0224 01:01:13.580908 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:13.580917 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:13.583098 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:13.583114 21922 round_trippers.go:577] Response Headers:
I0224 01:01:13.583121 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:13 GMT
I0224 01:01:13.583127 21922 round_trippers.go:580] Audit-Id: 5d470667-6887-446a-bebf-6e3e2aea5567
I0224 01:01:13.583135 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:13.583143 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:13.583151 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:13.583157 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:13.583285 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:14.084606 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:14.084630 21922 round_trippers.go:469] Request Headers:
I0224 01:01:14.084638 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:14.084644 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:14.087319 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:14.087339 21922 round_trippers.go:577] Response Headers:
I0224 01:01:14.087346 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:14.087352 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:14.087357 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:14.087363 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:14.087369 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:14 GMT
I0224 01:01:14.087381 21922 round_trippers.go:580] Audit-Id: 3e148909-924b-4e56-9530-4d72b3e00728
I0224 01:01:14.088003 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:14.584763 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:14.584787 21922 round_trippers.go:469] Request Headers:
I0224 01:01:14.584795 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:14.584801 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:14.587112 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:14.587133 21922 round_trippers.go:577] Response Headers:
I0224 01:01:14.587140 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:14.587146 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:14 GMT
I0224 01:01:14.587152 21922 round_trippers.go:580] Audit-Id: 73fe93f4-f96c-45f9-a326-fa27738ab670
I0224 01:01:14.587157 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:14.587169 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:14.587174 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:14.587324 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:15.084965 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:15.084991 21922 round_trippers.go:469] Request Headers:
I0224 01:01:15.085004 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:15.085014 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:15.087317 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:15.087335 21922 round_trippers.go:577] Response Headers:
I0224 01:01:15.087342 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:15.087348 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:15.087353 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:15.087358 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:15 GMT
I0224 01:01:15.087370 21922 round_trippers.go:580] Audit-Id: 7e26222f-1b9b-454f-90dd-f6fd98d9d7e0
I0224 01:01:15.087378 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:15.087703 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:15.584161 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:15.584199 21922 round_trippers.go:469] Request Headers:
I0224 01:01:15.584207 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:15.584213 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:15.586675 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:15.586692 21922 round_trippers.go:577] Response Headers:
I0224 01:01:15.586699 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:15 GMT
I0224 01:01:15.586711 21922 round_trippers.go:580] Audit-Id: 1a17cd53-e8b0-414f-b592-0bd076c56659
I0224 01:01:15.586722 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:15.586738 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:15.586750 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:15.586760 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:15.586870 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:15.587193 21922 node_ready.go:58] node "multinode-858631" has status "Ready":"False"
I0224 01:01:16.084499 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:16.084524 21922 round_trippers.go:469] Request Headers:
I0224 01:01:16.084540 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:16.084548 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:16.088844 21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0224 01:01:16.088864 21922 round_trippers.go:577] Response Headers:
I0224 01:01:16.088871 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:16.088886 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:16 GMT
I0224 01:01:16.088898 21922 round_trippers.go:580] Audit-Id: 2a09160b-8c5c-43ec-914f-898c6fdcd59f
I0224 01:01:16.088908 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:16.088917 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:16.088928 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:16.089187 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:16.584888 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:16.584915 21922 round_trippers.go:469] Request Headers:
I0224 01:01:16.584926 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:16.584934 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:16.587497 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:16.587517 21922 round_trippers.go:577] Response Headers:
I0224 01:01:16.587524 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:16.587535 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:16.587549 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:16 GMT
I0224 01:01:16.587567 21922 round_trippers.go:580] Audit-Id: e6ab8071-7f78-4427-892c-817dab6fea51
I0224 01:01:16.587575 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:16.587584 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:16.587844 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:17.084561 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:17.084586 21922 round_trippers.go:469] Request Headers:
I0224 01:01:17.084598 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:17.084606 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:17.087457 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:17.087475 21922 round_trippers.go:577] Response Headers:
I0224 01:01:17.087481 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:17.087487 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:17.087493 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:17 GMT
I0224 01:01:17.087504 21922 round_trippers.go:580] Audit-Id: 3b897faa-02de-4667-aae4-379682378ba7
I0224 01:01:17.087517 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:17.087526 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:17.087836 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:17.583949 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:17.583972 21922 round_trippers.go:469] Request Headers:
I0224 01:01:17.583980 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:17.583991 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:17.586623 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:17.586648 21922 round_trippers.go:577] Response Headers:
I0224 01:01:17.586657 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:17 GMT
I0224 01:01:17.586667 21922 round_trippers.go:580] Audit-Id: 8ddbc8e0-0198-417b-bbc6-bd40f1c2724c
I0224 01:01:17.586676 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:17.586684 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:17.586693 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:17.586702 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:17.586834 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:18.084547 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:18.084576 21922 round_trippers.go:469] Request Headers:
I0224 01:01:18.084588 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:18.084598 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:18.087229 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:18.087252 21922 round_trippers.go:577] Response Headers:
I0224 01:01:18.087263 21922 round_trippers.go:580] Audit-Id: cd6f5bc7-5d30-4584-8c61-ddc6f9c0549d
I0224 01:01:18.087272 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:18.087280 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:18.087289 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:18.087301 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:18.087310 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:18 GMT
I0224 01:01:18.087578 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:18.087889 21922 node_ready.go:58] node "multinode-858631" has status "Ready":"False"
I0224 01:01:18.584271 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:18.584299 21922 round_trippers.go:469] Request Headers:
I0224 01:01:18.584312 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:18.584323 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:18.587574 21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0224 01:01:18.587592 21922 round_trippers.go:577] Response Headers:
I0224 01:01:18.587607 21922 round_trippers.go:580] Audit-Id: fb62ee42-0041-4aff-b80f-d8d1eba05ee6
I0224 01:01:18.587615 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:18.587623 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:18.587631 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:18.587640 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:18.587657 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:18 GMT
I0224 01:01:18.588201 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:19.084943 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:19.084967 21922 round_trippers.go:469] Request Headers:
I0224 01:01:19.084979 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:19.084989 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:19.087624 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:19.087645 21922 round_trippers.go:577] Response Headers:
I0224 01:01:19.087655 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:19 GMT
I0224 01:01:19.087664 21922 round_trippers.go:580] Audit-Id: 53374b9c-4b52-4235-a921-a68a6393da76
I0224 01:01:19.087671 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:19.087681 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:19.087694 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:19.087707 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:19.087991 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:19.584679 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:19.584700 21922 round_trippers.go:469] Request Headers:
I0224 01:01:19.584708 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:19.584715 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:19.587473 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:19.587492 21922 round_trippers.go:577] Response Headers:
I0224 01:01:19.587500 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:19.587506 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:19.587512 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:19 GMT
I0224 01:01:19.587517 21922 round_trippers.go:580] Audit-Id: 77c01102-6483-4c90-b2db-304243fc4bbb
I0224 01:01:19.587523 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:19.587528 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:19.587919 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:20.084608 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:20.084628 21922 round_trippers.go:469] Request Headers:
I0224 01:01:20.084635 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:20.084642 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:20.087275 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:20.087295 21922 round_trippers.go:577] Response Headers:
I0224 01:01:20.087302 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:20.087308 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:20.087315 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:20.087320 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:20.087325 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:20 GMT
I0224 01:01:20.087331 21922 round_trippers.go:580] Audit-Id: aa93f272-50e8-41fd-8eb7-e5279ff6a5ec
I0224 01:01:20.087709 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:20.087978 21922 node_ready.go:58] node "multinode-858631" has status "Ready":"False"
I0224 01:01:20.584338 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:20.584359 21922 round_trippers.go:469] Request Headers:
I0224 01:01:20.584367 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:20.584373 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:20.587638 21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0224 01:01:20.587660 21922 round_trippers.go:577] Response Headers:
I0224 01:01:20.587670 21922 round_trippers.go:580] Audit-Id: e724f7fd-144d-41ad-833f-d1da3b4a75a6
I0224 01:01:20.587680 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:20.587688 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:20.587696 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:20.587701 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:20.587707 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:20 GMT
I0224 01:01:20.588084 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:21.084770 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:21.084792 21922 round_trippers.go:469] Request Headers:
I0224 01:01:21.084800 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:21.084806 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:21.087215 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:21.087233 21922 round_trippers.go:577] Response Headers:
I0224 01:01:21.087239 21922 round_trippers.go:580] Audit-Id: 45b773b9-3d4d-4dd6-bb4e-6b60f9caf176
I0224 01:01:21.087245 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:21.087251 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:21.087256 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:21.087261 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:21.087267 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:21 GMT
I0224 01:01:21.087669 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:21.584311 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:21.584339 21922 round_trippers.go:469] Request Headers:
I0224 01:01:21.584347 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:21.584353 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:21.586563 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:21.586581 21922 round_trippers.go:577] Response Headers:
I0224 01:01:21.586589 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:21 GMT
I0224 01:01:21.586594 21922 round_trippers.go:580] Audit-Id: 3a4c58f4-45f2-4b2e-a3c7-879c2062e2b2
I0224 01:01:21.586600 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:21.586605 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:21.586612 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:21.586625 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:21.586930 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:22.084668 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:22.084692 21922 round_trippers.go:469] Request Headers:
I0224 01:01:22.084701 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:22.084707 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:22.087278 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:22.087298 21922 round_trippers.go:577] Response Headers:
I0224 01:01:22.087306 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:22.087312 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:22.087317 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:22.087323 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:22.087332 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:22 GMT
I0224 01:01:22.087337 21922 round_trippers.go:580] Audit-Id: ae197a04-b5b2-42b1-9878-150150dc0c4c
I0224 01:01:22.087775 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:22.088042 21922 node_ready.go:58] node "multinode-858631" has status "Ready":"False"
I0224 01:01:22.584035 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:22.584057 21922 round_trippers.go:469] Request Headers:
I0224 01:01:22.584065 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:22.584071 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:22.586776 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:22.586796 21922 round_trippers.go:577] Response Headers:
I0224 01:01:22.586803 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:22 GMT
I0224 01:01:22.586809 21922 round_trippers.go:580] Audit-Id: 010459f3-6025-4ff9-9e08-60193e099995
I0224 01:01:22.586818 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:22.586827 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:22.586835 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:22.586844 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:22.587125 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:23.084366 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:23.084389 21922 round_trippers.go:469] Request Headers:
I0224 01:01:23.084397 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:23.084403 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:23.087520 21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0224 01:01:23.087536 21922 round_trippers.go:577] Response Headers:
I0224 01:01:23.087543 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:23.087549 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:23.087554 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:23.087566 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:23.087575 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:23 GMT
I0224 01:01:23.087585 21922 round_trippers.go:580] Audit-Id: 038e94a7-0fef-4d4f-8620-4604d14e25ff
I0224 01:01:23.087811 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
I0224 01:01:23.584487 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:23.584509 21922 round_trippers.go:469] Request Headers:
I0224 01:01:23.584518 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:23.584524 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:23.587307 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:23.587328 21922 round_trippers.go:577] Response Headers:
I0224 01:01:23.587339 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:23.587349 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:23.587357 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:23.587363 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:23.587368 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:23 GMT
I0224 01:01:23.587374 21922 round_trippers.go:580] Audit-Id: b15d1cc7-31d6-4eb2-991f-77b0e1076313
I0224 01:01:23.587801 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:23.588125 21922 node_ready.go:49] node "multinode-858631" has status "Ready":"True"
I0224 01:01:23.588142 21922 node_ready.go:38] duration metric: took 10.007287326s waiting for node "multinode-858631" to be "Ready" ...
I0224 01:01:23.588149 21922 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0224 01:01:23.588219 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
I0224 01:01:23.588227 21922 round_trippers.go:469] Request Headers:
I0224 01:01:23.588234 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:23.588240 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:23.591200 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:23.591218 21922 round_trippers.go:577] Response Headers:
I0224 01:01:23.591227 21922 round_trippers.go:580] Audit-Id: 58254c31-970b-43fb-a5f9-7b97c167eba5
I0224 01:01:23.591235 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:23.591244 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:23.591253 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:23.591263 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:23.591276 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:23 GMT
I0224 01:01:23.592076 21922 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"397"},"items":[{"metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53937 chars]
I0224 01:01:23.594870 21922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace to be "Ready" ...
I0224 01:01:23.594923 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
I0224 01:01:23.594927 21922 round_trippers.go:469] Request Headers:
I0224 01:01:23.594934 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:23.594941 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:23.596919 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:01:23.596940 21922 round_trippers.go:577] Response Headers:
I0224 01:01:23.596950 21922 round_trippers.go:580] Audit-Id: 7263b68a-6b58-4099-92df-5539157b5de2
I0224 01:01:23.596958 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:23.596966 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:23.596975 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:23.596992 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:23.597001 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:23 GMT
I0224 01:01:23.597274 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
I0224 01:01:23.597827 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:23.597851 21922 round_trippers.go:469] Request Headers:
I0224 01:01:23.597862 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:23.597871 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:23.599672 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:01:23.599684 21922 round_trippers.go:577] Response Headers:
I0224 01:01:23.599690 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:23 GMT
I0224 01:01:23.599695 21922 round_trippers.go:580] Audit-Id: 17b216d3-bf4e-45f3-8059-a88bb1d93e80
I0224 01:01:23.599704 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:23.599712 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:23.599721 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:23.599731 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:23.599882 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:24.100893 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
I0224 01:01:24.100917 21922 round_trippers.go:469] Request Headers:
I0224 01:01:24.100928 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:24.100936 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:24.104029 21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0224 01:01:24.104049 21922 round_trippers.go:577] Response Headers:
I0224 01:01:24.104060 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:24.104068 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:24.104076 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:24 GMT
I0224 01:01:24.104086 21922 round_trippers.go:580] Audit-Id: a119985b-691f-46ca-929a-138d1889bfc8
I0224 01:01:24.104096 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:24.104106 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:24.104254 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
I0224 01:01:24.104866 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:24.104885 21922 round_trippers.go:469] Request Headers:
I0224 01:01:24.104895 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:24.104908 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:24.107175 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:24.107190 21922 round_trippers.go:577] Response Headers:
I0224 01:01:24.107199 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:24 GMT
I0224 01:01:24.107207 21922 round_trippers.go:580] Audit-Id: ee04c0cd-aacd-4c7a-a15b-8bd4a0754dbb
I0224 01:01:24.107216 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:24.107223 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:24.107232 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:24.107250 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:24.107587 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:24.601291 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
I0224 01:01:24.601317 21922 round_trippers.go:469] Request Headers:
I0224 01:01:24.601325 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:24.601331 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:24.603562 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:24.603581 21922 round_trippers.go:577] Response Headers:
I0224 01:01:24.603590 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:24.603598 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:24 GMT
I0224 01:01:24.603607 21922 round_trippers.go:580] Audit-Id: c71d1331-6dd6-4b84-b7f2-0085bc89a790
I0224 01:01:24.603616 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:24.603625 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:24.603631 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:24.603868 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
I0224 01:01:24.604264 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:24.604274 21922 round_trippers.go:469] Request Headers:
I0224 01:01:24.604281 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:24.604287 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:24.608705 21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0224 01:01:24.608725 21922 round_trippers.go:577] Response Headers:
I0224 01:01:24.608734 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:24.608743 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:24 GMT
I0224 01:01:24.608750 21922 round_trippers.go:580] Audit-Id: 7154afb8-b1e9-4d04-9f66-80e749a371dd
I0224 01:01:24.608755 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:24.608761 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:24.608766 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:24.608978 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:25.100631 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
I0224 01:01:25.100653 21922 round_trippers.go:469] Request Headers:
I0224 01:01:25.100661 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:25.100668 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:25.103193 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:25.103213 21922 round_trippers.go:577] Response Headers:
I0224 01:01:25.103221 21922 round_trippers.go:580] Audit-Id: d694012a-3eac-4d86-ab59-d7b7ba9d8d71
I0224 01:01:25.103227 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:25.103232 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:25.103237 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:25.103242 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:25.103248 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:25 GMT
I0224 01:01:25.103690 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
I0224 01:01:25.104084 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:25.104095 21922 round_trippers.go:469] Request Headers:
I0224 01:01:25.104103 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:25.104109 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:25.106291 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:25.106323 21922 round_trippers.go:577] Response Headers:
I0224 01:01:25.106334 21922 round_trippers.go:580] Audit-Id: 274b4dd5-2725-4c3b-929e-2e2b71aefcf1
I0224 01:01:25.106341 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:25.106350 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:25.106355 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:25.106362 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:25.106369 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:25 GMT
I0224 01:01:25.106638 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:25.600273 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
I0224 01:01:25.600300 21922 round_trippers.go:469] Request Headers:
I0224 01:01:25.600312 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:25.600323 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:25.602430 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:25.602455 21922 round_trippers.go:577] Response Headers:
I0224 01:01:25.602466 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:25.602475 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:25 GMT
I0224 01:01:25.602487 21922 round_trippers.go:580] Audit-Id: 8cd1c153-4732-44b6-bd55-3054e44f6ac3
I0224 01:01:25.602496 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:25.602509 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:25.602521 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:25.603051 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
I0224 01:01:25.603685 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:25.603702 21922 round_trippers.go:469] Request Headers:
I0224 01:01:25.603713 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:25.603724 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:25.605843 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:25.605863 21922 round_trippers.go:577] Response Headers:
I0224 01:01:25.605872 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:25.605880 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:25.605888 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:25.605903 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:25 GMT
I0224 01:01:25.605911 21922 round_trippers.go:580] Audit-Id: 2e4b5ea2-f2f2-4bee-86e8-a582e12e5fdb
I0224 01:01:25.605920 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:25.606052 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:25.606389 21922 pod_ready.go:102] pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace has status "Ready":"False"
I0224 01:01:26.100708 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
I0224 01:01:26.100732 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.100741 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.100747 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.103451 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:26.103467 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.103474 21922 round_trippers.go:580] Audit-Id: 5c7dfe86-4aa4-40c2-9ee9-3313dae7a357
I0224 01:01:26.103480 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.103485 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.103490 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.103495 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.103501 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.103894 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
I0224 01:01:26.104277 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:26.104287 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.104294 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.104300 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.106662 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:26.106682 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.106693 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.106710 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.106719 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.106726 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.106734 21922 round_trippers.go:580] Audit-Id: ef9bae96-0f84-4647-b2e4-a5f5a64a068e
I0224 01:01:26.106744 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.106851 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:26.600420 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
I0224 01:01:26.600439 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.600447 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.600454 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.602649 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:26.602674 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.602683 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.602692 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.602701 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.602709 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.602719 21922 round_trippers.go:580] Audit-Id: e88d8e9c-5391-4a14-941d-de87cfccb39f
I0224 01:01:26.602727 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.602859 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"412","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
I0224 01:01:26.603477 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:26.603498 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.603508 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.603518 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.606055 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:26.606074 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.606083 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.606092 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.606101 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.606112 21922 round_trippers.go:580] Audit-Id: ebb2e441-cc8c-4c0d-b4ca-ed0b45f9b3d2
I0224 01:01:26.606119 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.606125 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.606293 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:26.606635 21922 pod_ready.go:92] pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace has status "Ready":"True"
I0224 01:01:26.606655 21922 pod_ready.go:81] duration metric: took 3.011766765s waiting for pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace to be "Ready" ...
I0224 01:01:26.606664 21922 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:01:26.606703 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-858631
I0224 01:01:26.606713 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.606724 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.606737 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.608533 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:01:26.608548 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.608558 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.608570 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.608582 21922 round_trippers.go:580] Audit-Id: b124132b-7462-4604-a671-a0a09a7a5cec
I0224 01:01:26.608591 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.608603 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.608613 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.608756 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-858631","namespace":"kube-system","uid":"7b4b146b-12c8-4b3f-a682-8ab64a9135cb","resourceVersion":"276","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"dc4f8bffc9d97af45e685dda88cd2a94","kubernetes.io/config.mirror":"dc4f8bffc9d97af45e685dda88cd2a94","kubernetes.io/config.seen":"2023-02-24T01:00:59.730785607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5856 chars]
I0224 01:01:26.609171 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:26.609183 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.609193 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.609202 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.611270 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:26.611286 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.611292 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.611301 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.611309 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.611319 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.611328 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.611338 21922 round_trippers.go:580] Audit-Id: 2e18661f-b8dc-435f-a212-7612404a7116
I0224 01:01:26.611455 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:26.611702 21922 pod_ready.go:92] pod "etcd-multinode-858631" in "kube-system" namespace has status "Ready":"True"
I0224 01:01:26.611714 21922 pod_ready.go:81] duration metric: took 5.043331ms waiting for pod "etcd-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:01:26.611727 21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:01:26.611767 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-858631
I0224 01:01:26.611777 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.611787 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.611796 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.613664 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:01:26.613678 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.613685 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.613690 21922 round_trippers.go:580] Audit-Id: f8ed5e9c-7381-42dc-8d48-fae056e18972
I0224 01:01:26.613695 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.613704 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.613720 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.613732 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.614050 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-858631","namespace":"kube-system","uid":"ad778dac-86be-4c5e-8b3f-2afb354e374a","resourceVersion":"299","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.217:8443","kubernetes.io/config.hash":"2a1bcd287381cc62f4271365e9d57dba","kubernetes.io/config.mirror":"2a1bcd287381cc62f4271365e9d57dba","kubernetes.io/config.seen":"2023-02-24T01:00:59.730814539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7392 chars]
I0224 01:01:26.614474 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:26.614486 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.614497 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.614507 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.616149 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:01:26.616163 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.616172 21922 round_trippers.go:580] Audit-Id: 8759fe08-74f9-46b1-a094-932be9a14de5
I0224 01:01:26.616181 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.616190 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.616200 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.616213 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.616226 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.616345 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:26.616645 21922 pod_ready.go:92] pod "kube-apiserver-multinode-858631" in "kube-system" namespace has status "Ready":"True"
I0224 01:01:26.616659 21922 pod_ready.go:81] duration metric: took 4.925024ms waiting for pod "kube-apiserver-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:01:26.616669 21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:01:26.616728 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-858631
I0224 01:01:26.616739 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.616750 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.616763 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.618337 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:01:26.618351 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.618362 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.618371 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.618382 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.618394 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.618404 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.618416 21922 round_trippers.go:580] Audit-Id: 23371965-5757-4789-b4cf-961c7cab57a8
I0224 01:01:26.618619 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-858631","namespace":"kube-system","uid":"c1e4ec9e-a1e9-4f43-8b1b-95c797d33242","resourceVersion":"272","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb3b8d57c02f5e81e5a272ffb5f3fbe3","kubernetes.io/config.mirror":"cb3b8d57c02f5e81e5a272ffb5f3fbe3","kubernetes.io/config.seen":"2023-02-24T01:00:59.730815908Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6957 chars]
I0224 01:01:26.618907 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:26.618919 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.618929 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.618938 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.620446 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:01:26.620460 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.620469 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.620475 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.620480 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.620487 21922 round_trippers.go:580] Audit-Id: d7721c7a-e679-4406-8a5c-6f5d29bc2451
I0224 01:01:26.620498 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.620511 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.620728 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:26.621125 21922 pod_ready.go:92] pod "kube-controller-manager-multinode-858631" in "kube-system" namespace has status "Ready":"True"
I0224 01:01:26.621141 21922 pod_ready.go:81] duration metric: took 4.459168ms waiting for pod "kube-controller-manager-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:01:26.621149 21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vlrn6" in "kube-system" namespace to be "Ready" ...
I0224 01:01:26.621196 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vlrn6
I0224 01:01:26.621206 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.621216 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.621228 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.622885 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:01:26.622900 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.622909 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.622918 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.622929 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.622945 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.622954 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.622963 21922 round_trippers.go:580] Audit-Id: 7c30379b-2e89-445e-99b1-1da9032541bd
I0224 01:01:26.624804 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vlrn6","generateName":"kube-proxy-","namespace":"kube-system","uid":"ed1ab279-4267-4c3c-a68d-a729dc29f05b","resourceVersion":"367","creationTimestamp":"2023-02-24T01:01:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4ec6a9ff-44a2-44e8-9e3b-270212238f31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ec6a9ff-44a2-44e8-9e3b-270212238f31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
I0224 01:01:26.625630 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:26.625648 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.625659 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.625669 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.627243 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:01:26.627257 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.627264 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.627270 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.627275 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.627281 21922 round_trippers.go:580] Audit-Id: c2f4d362-c2cb-4103-921f-08e2de1fd269
I0224 01:01:26.627286 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.627294 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.627856 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:26.628067 21922 pod_ready.go:92] pod "kube-proxy-vlrn6" in "kube-system" namespace has status "Ready":"True"
I0224 01:01:26.628076 21922 pod_ready.go:81] duration metric: took 6.921536ms waiting for pod "kube-proxy-vlrn6" in "kube-system" namespace to be "Ready" ...
I0224 01:01:26.628082 21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:01:26.801456 21922 request.go:622] Waited for 173.313227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-858631
I0224 01:01:26.801519 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-858631
I0224 01:01:26.801526 21922 round_trippers.go:469] Request Headers:
I0224 01:01:26.801535 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:26.801543 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:26.805761 21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0224 01:01:26.805781 21922 round_trippers.go:577] Response Headers:
I0224 01:01:26.805794 21922 round_trippers.go:580] Audit-Id: 4f37a918-fae6-4aff-8189-325f152594da
I0224 01:01:26.805804 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:26.805822 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:26.805830 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:26.805840 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:26.805849 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:26 GMT
I0224 01:01:26.805979 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-858631","namespace":"kube-system","uid":"fcadaacc-9d90-4113-9bf9-b77ccbc47586","resourceVersion":"294","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a679af228396ab9ab09a15d1ab16cad8","kubernetes.io/config.mirror":"a679af228396ab9ab09a15d1ab16cad8","kubernetes.io/config.seen":"2023-02-24T01:00:59.730816890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4687 chars]
I0224 01:01:27.000585 21922 request.go:622] Waited for 194.277725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:27.000635 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:01:27.000652 21922 round_trippers.go:469] Request Headers:
I0224 01:01:27.000659 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:27.000669 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:27.002807 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:27.002824 21922 round_trippers.go:577] Response Headers:
I0224 01:01:27.002831 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:27 GMT
I0224 01:01:27.002837 21922 round_trippers.go:580] Audit-Id: 0f328137-69b3-49f6-af03-7a63cfbf62f8
I0224 01:01:27.002848 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:27.002861 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:27.002877 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:27.002887 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:27.003230 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
I0224 01:01:27.003488 21922 pod_ready.go:92] pod "kube-scheduler-multinode-858631" in "kube-system" namespace has status "Ready":"True"
I0224 01:01:27.003504 21922 pod_ready.go:81] duration metric: took 375.411134ms waiting for pod "kube-scheduler-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:01:27.003514 21922 pod_ready.go:38] duration metric: took 3.41533873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0224 01:01:27.003531 21922 api_server.go:51] waiting for apiserver process to appear ...
I0224 01:01:27.003568 21922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 01:01:27.017182 21922 command_runner.go:130] > 1850
I0224 01:01:27.017235 21922 api_server.go:71] duration metric: took 14.257786757s to wait for apiserver process to appear ...
I0224 01:01:27.017248 21922 api_server.go:87] waiting for apiserver healthz status ...
I0224 01:01:27.017256 21922 api_server.go:252] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
I0224 01:01:27.022214 21922 api_server.go:278] https://192.168.39.217:8443/healthz returned 200:
ok
I0224 01:01:27.022265 21922 round_trippers.go:463] GET https://192.168.39.217:8443/version
I0224 01:01:27.022272 21922 round_trippers.go:469] Request Headers:
I0224 01:01:27.022287 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:27.022301 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:27.023168 21922 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0224 01:01:27.023181 21922 round_trippers.go:577] Response Headers:
I0224 01:01:27.023191 21922 round_trippers.go:580] Content-Length: 263
I0224 01:01:27.023199 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:27 GMT
I0224 01:01:27.023208 21922 round_trippers.go:580] Audit-Id: d273673d-546c-4f76-8eae-9c9d5b37652c
I0224 01:01:27.023221 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:27.023231 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:27.023245 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:27.023255 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:27.023277 21922 request.go:1171] Response Body: {
"major": "1",
"minor": "26",
"gitVersion": "v1.26.1",
"gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
"gitTreeState": "clean",
"buildDate": "2023-01-18T15:51:25Z",
"goVersion": "go1.19.5",
"compiler": "gc",
"platform": "linux/amd64"
}
I0224 01:01:27.023346 21922 api_server.go:140] control plane version: v1.26.1
I0224 01:01:27.023360 21922 api_server.go:130] duration metric: took 6.106014ms to wait for apiserver health ...
I0224 01:01:27.023368 21922 system_pods.go:43] waiting for kube-system pods to appear ...
I0224 01:01:27.200772 21922 request.go:622] Waited for 177.339524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
I0224 01:01:27.200824 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
I0224 01:01:27.200829 21922 round_trippers.go:469] Request Headers:
I0224 01:01:27.200836 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:27.200843 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:27.204081 21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0224 01:01:27.204100 21922 round_trippers.go:577] Response Headers:
I0224 01:01:27.204110 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:27.204118 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:27 GMT
I0224 01:01:27.204126 21922 round_trippers.go:580] Audit-Id: 136f9bae-f175-4e4a-832b-35513f14b820
I0224 01:01:27.204135 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:27.204146 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:27.204157 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:27.205322 21922 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"412","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54053 chars]
I0224 01:01:27.207203 21922 system_pods.go:59] 8 kube-system pods found
I0224 01:01:27.207231 21922 system_pods.go:61] "coredns-787d4945fb-xhwx9" [9d799d4f-0d4b-468e-85ad-052c1735e35c] Running
I0224 01:01:27.207237 21922 system_pods.go:61] "etcd-multinode-858631" [7b4b146b-12c8-4b3f-a682-8ab64a9135cb] Running
I0224 01:01:27.207242 21922 system_pods.go:61] "kindnet-cdxbx" [55b36f8b-ffbe-49b3-99fc-aea074319cd0] Running
I0224 01:01:27.207246 21922 system_pods.go:61] "kube-apiserver-multinode-858631" [ad778dac-86be-4c5e-8b3f-2afb354e374a] Running
I0224 01:01:27.207257 21922 system_pods.go:61] "kube-controller-manager-multinode-858631" [c1e4ec9e-a1e9-4f43-8b1b-95c797d33242] Running
I0224 01:01:27.207262 21922 system_pods.go:61] "kube-proxy-vlrn6" [ed1ab279-4267-4c3c-a68d-a729dc29f05b] Running
I0224 01:01:27.207266 21922 system_pods.go:61] "kube-scheduler-multinode-858631" [fcadaacc-9d90-4113-9bf9-b77ccbc47586] Running
I0224 01:01:27.207271 21922 system_pods.go:61] "storage-provisioner" [7ec578fe-05c4-4916-8db9-67ee112c136f] Running
I0224 01:01:27.207275 21922 system_pods.go:74] duration metric: took 183.902698ms to wait for pod list to return data ...
I0224 01:01:27.207282 21922 default_sa.go:34] waiting for default service account to be created ...
I0224 01:01:27.400613 21922 request.go:622] Waited for 193.275621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
I0224 01:01:27.400687 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
I0224 01:01:27.400695 21922 round_trippers.go:469] Request Headers:
I0224 01:01:27.400707 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:27.400725 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:27.403265 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:27.403289 21922 round_trippers.go:577] Response Headers:
I0224 01:01:27.403299 21922 round_trippers.go:580] Audit-Id: a8644e44-e42f-43e1-8f33-9f762e161490
I0224 01:01:27.403306 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:27.403314 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:27.403320 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:27.403331 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:27.403336 21922 round_trippers.go:580] Content-Length: 261
I0224 01:01:27.403344 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:27 GMT
I0224 01:01:27.403424 21922 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7b317ba6-8061-415e-bec5-8cdc2f9b9c04","resourceVersion":"316","creationTimestamp":"2023-02-24T01:01:11Z"}}]}
I0224 01:01:27.403608 21922 default_sa.go:45] found service account: "default"
I0224 01:01:27.403621 21922 default_sa.go:55] duration metric: took 196.334205ms for default service account to be created ...
I0224 01:01:27.403630 21922 system_pods.go:116] waiting for k8s-apps to be running ...
I0224 01:01:27.601068 21922 request.go:622] Waited for 197.377248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
I0224 01:01:27.601126 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
I0224 01:01:27.601131 21922 round_trippers.go:469] Request Headers:
I0224 01:01:27.601139 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:27.601145 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:27.605257 21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0224 01:01:27.605278 21922 round_trippers.go:577] Response Headers:
I0224 01:01:27.605288 21922 round_trippers.go:580] Audit-Id: 754f7a46-5b3a-4878-bc23-1aa06e82181b
I0224 01:01:27.605303 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:27.605316 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:27.605328 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:27.605338 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:27.605348 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:27 GMT
I0224 01:01:27.606456 21922 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"412","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54053 chars]
I0224 01:01:27.608040 21922 system_pods.go:86] 8 kube-system pods found
I0224 01:01:27.608060 21922 system_pods.go:89] "coredns-787d4945fb-xhwx9" [9d799d4f-0d4b-468e-85ad-052c1735e35c] Running
I0224 01:01:27.608067 21922 system_pods.go:89] "etcd-multinode-858631" [7b4b146b-12c8-4b3f-a682-8ab64a9135cb] Running
I0224 01:01:27.608074 21922 system_pods.go:89] "kindnet-cdxbx" [55b36f8b-ffbe-49b3-99fc-aea074319cd0] Running
I0224 01:01:27.608080 21922 system_pods.go:89] "kube-apiserver-multinode-858631" [ad778dac-86be-4c5e-8b3f-2afb354e374a] Running
I0224 01:01:27.608088 21922 system_pods.go:89] "kube-controller-manager-multinode-858631" [c1e4ec9e-a1e9-4f43-8b1b-95c797d33242] Running
I0224 01:01:27.608095 21922 system_pods.go:89] "kube-proxy-vlrn6" [ed1ab279-4267-4c3c-a68d-a729dc29f05b] Running
I0224 01:01:27.608107 21922 system_pods.go:89] "kube-scheduler-multinode-858631" [fcadaacc-9d90-4113-9bf9-b77ccbc47586] Running
I0224 01:01:27.608118 21922 system_pods.go:89] "storage-provisioner" [7ec578fe-05c4-4916-8db9-67ee112c136f] Running
I0224 01:01:27.608128 21922 system_pods.go:126] duration metric: took 204.491636ms to wait for k8s-apps to be running ...
I0224 01:01:27.608143 21922 system_svc.go:44] waiting for kubelet service to be running ....
I0224 01:01:27.608191 21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0224 01:01:27.622145 21922 system_svc.go:56] duration metric: took 13.995035ms WaitForService to wait for kubelet.
I0224 01:01:27.622165 21922 kubeadm.go:578] duration metric: took 14.862716251s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0224 01:01:27.622186 21922 node_conditions.go:102] verifying NodePressure condition ...
I0224 01:01:27.800523 21922 request.go:622] Waited for 178.264445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
I0224 01:01:27.800594 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
I0224 01:01:27.800607 21922 round_trippers.go:469] Request Headers:
I0224 01:01:27.800618 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:01:27.800631 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:01:27.803316 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:01:27.803338 21922 round_trippers.go:577] Response Headers:
I0224 01:01:27.803348 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:01:27.803356 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:01:27 GMT
I0224 01:01:27.803366 21922 round_trippers.go:580] Audit-Id: f760f57d-261d-4597-90e1-e9b04bed9639
I0224 01:01:27.803378 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:01:27.803389 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:01:27.803406 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:01:27.803846 21922 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5006 chars]
I0224 01:01:27.804193 21922 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0224 01:01:27.804215 21922 node_conditions.go:123] node cpu capacity is 2
I0224 01:01:27.804229 21922 node_conditions.go:105] duration metric: took 182.03459ms to run NodePressure ...
I0224 01:01:27.804243 21922 start.go:228] waiting for startup goroutines ...
I0224 01:01:27.804252 21922 start.go:233] waiting for cluster config update ...
I0224 01:01:27.804269 21922 start.go:242] writing updated cluster config ...
I0224 01:01:27.806739 21922 out.go:177]
I0224 01:01:27.808219 21922 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0224 01:01:27.808303 21922 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
I0224 01:01:27.809986 21922 out.go:177] * Starting worker node multinode-858631-m02 in cluster multinode-858631
I0224 01:01:27.811308 21922 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0224 01:01:27.811327 21922 cache.go:57] Caching tarball of preloaded images
I0224 01:01:27.811412 21922 preload.go:174] Found /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0224 01:01:27.811425 21922 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0224 01:01:27.811504 21922 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
I0224 01:01:27.811651 21922 cache.go:193] Successfully downloaded all kic artifacts
I0224 01:01:27.811677 21922 start.go:364] acquiring machines lock for multinode-858631-m02: {Name:mk99c679472abf655c2223ea7db4ce727d2ab6ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0224 01:01:27.811724 21922 start.go:368] acquired machines lock for "multinode-858631-m02" in 29.866µs
I0224 01:01:27.811747 21922 start.go:93] Provisioning new machine with config: &{Name:multinode-858631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
I0224 01:01:27.811816 21922 start.go:125] createHost starting for "m02" (driver="kvm2")
I0224 01:01:27.813625 21922 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0224 01:01:27.813705 21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0224 01:01:27.813734 21922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 01:01:27.827402 21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
I0224 01:01:27.827827 21922 main.go:141] libmachine: () Calling .GetVersion
I0224 01:01:27.828305 21922 main.go:141] libmachine: Using API Version 1
I0224 01:01:27.828323 21922 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 01:01:27.828605 21922 main.go:141] libmachine: () Calling .GetMachineName
I0224 01:01:27.828776 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetMachineName
I0224 01:01:27.828894 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
I0224 01:01:27.829028 21922 start.go:159] libmachine.API.Create for "multinode-858631" (driver="kvm2")
I0224 01:01:27.829057 21922 client.go:168] LocalClient.Create starting
I0224 01:01:27.829090 21922 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem
I0224 01:01:27.829123 21922 main.go:141] libmachine: Decoding PEM data...
I0224 01:01:27.829146 21922 main.go:141] libmachine: Parsing certificate...
I0224 01:01:27.829213 21922 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem
I0224 01:01:27.829239 21922 main.go:141] libmachine: Decoding PEM data...
I0224 01:01:27.829258 21922 main.go:141] libmachine: Parsing certificate...
I0224 01:01:27.829289 21922 main.go:141] libmachine: Running pre-create checks...
I0224 01:01:27.829301 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .PreCreateCheck
I0224 01:01:27.829458 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetConfigRaw
I0224 01:01:27.829801 21922 main.go:141] libmachine: Creating machine...
I0224 01:01:27.829817 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .Create
I0224 01:01:27.829928 21922 main.go:141] libmachine: (multinode-858631-m02) Creating KVM machine...
I0224 01:01:27.831074 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found existing default KVM network
I0224 01:01:27.831235 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found existing private KVM network mk-multinode-858631
I0224 01:01:27.831320 21922 main.go:141] libmachine: (multinode-858631-m02) Setting up store path in /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02 ...
I0224 01:01:27.831349 21922 main.go:141] libmachine: (multinode-858631-m02) Building disk image from file:///home/jenkins/minikube-integration/15909-4074/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso
I0224 01:01:27.831426 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:27.831318 22156 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/15909-4074/.minikube
I0224 01:01:27.831522 21922 main.go:141] libmachine: (multinode-858631-m02) Downloading /home/jenkins/minikube-integration/15909-4074/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/15909-4074/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
I0224 01:01:28.023370 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:28.023232 22156 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa...
I0224 01:01:28.179934 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:28.179815 22156 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/multinode-858631-m02.rawdisk...
I0224 01:01:28.179982 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Writing magic tar header
I0224 01:01:28.179999 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Writing SSH key tar header
I0224 01:01:28.180013 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:28.179931 22156 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02 ...
I0224 01:01:28.180036 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02
I0224 01:01:28.180094 21922 main.go:141] libmachine: (multinode-858631-m02) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02 (perms=drwx------)
I0224 01:01:28.180109 21922 main.go:141] libmachine: (multinode-858631-m02) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074/.minikube/machines (perms=drwxrwxr-x)
I0224 01:01:28.180122 21922 main.go:141] libmachine: (multinode-858631-m02) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074/.minikube (perms=drwxr-xr-x)
I0224 01:01:28.180137 21922 main.go:141] libmachine: (multinode-858631-m02) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074 (perms=drwxrwxr-x)
I0224 01:01:28.180153 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074/.minikube/machines
I0224 01:01:28.180167 21922 main.go:141] libmachine: (multinode-858631-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0224 01:01:28.180182 21922 main.go:141] libmachine: (multinode-858631-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0224 01:01:28.180190 21922 main.go:141] libmachine: (multinode-858631-m02) Creating domain...
I0224 01:01:28.180203 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074/.minikube
I0224 01:01:28.180217 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074
I0224 01:01:28.180235 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0224 01:01:28.180249 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home/jenkins
I0224 01:01:28.180262 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home
I0224 01:01:28.180279 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Skipping /home - not owner
I0224 01:01:28.182150 21922 main.go:141] libmachine: (multinode-858631-m02) define libvirt domain using xml:
I0224 01:01:28.182175 21922 main.go:141] libmachine: (multinode-858631-m02) <domain type='kvm'>
I0224 01:01:28.182217 21922 main.go:141] libmachine: (multinode-858631-m02) <name>multinode-858631-m02</name>
I0224 01:01:28.182240 21922 main.go:141] libmachine: (multinode-858631-m02) <memory unit='MiB'>2200</memory>
I0224 01:01:28.182253 21922 main.go:141] libmachine: (multinode-858631-m02) <vcpu>2</vcpu>
I0224 01:01:28.182266 21922 main.go:141] libmachine: (multinode-858631-m02) <features>
I0224 01:01:28.182279 21922 main.go:141] libmachine: (multinode-858631-m02) <acpi/>
I0224 01:01:28.182290 21922 main.go:141] libmachine: (multinode-858631-m02) <apic/>
I0224 01:01:28.182306 21922 main.go:141] libmachine: (multinode-858631-m02) <pae/>
I0224 01:01:28.182319 21922 main.go:141] libmachine: (multinode-858631-m02)
I0224 01:01:28.182345 21922 main.go:141] libmachine: (multinode-858631-m02) </features>
I0224 01:01:28.182359 21922 main.go:141] libmachine: (multinode-858631-m02) <cpu mode='host-passthrough'>
I0224 01:01:28.182368 21922 main.go:141] libmachine: (multinode-858631-m02)
I0224 01:01:28.182380 21922 main.go:141] libmachine: (multinode-858631-m02) </cpu>
I0224 01:01:28.182393 21922 main.go:141] libmachine: (multinode-858631-m02) <os>
I0224 01:01:28.182406 21922 main.go:141] libmachine: (multinode-858631-m02) <type>hvm</type>
I0224 01:01:28.182420 21922 main.go:141] libmachine: (multinode-858631-m02) <boot dev='cdrom'/>
I0224 01:01:28.182431 21922 main.go:141] libmachine: (multinode-858631-m02) <boot dev='hd'/>
I0224 01:01:28.182445 21922 main.go:141] libmachine: (multinode-858631-m02) <bootmenu enable='no'/>
I0224 01:01:28.182457 21922 main.go:141] libmachine: (multinode-858631-m02) </os>
I0224 01:01:28.182470 21922 main.go:141] libmachine: (multinode-858631-m02) <devices>
I0224 01:01:28.182483 21922 main.go:141] libmachine: (multinode-858631-m02) <disk type='file' device='cdrom'>
I0224 01:01:28.182501 21922 main.go:141] libmachine: (multinode-858631-m02) <source file='/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/boot2docker.iso'/>
I0224 01:01:28.182514 21922 main.go:141] libmachine: (multinode-858631-m02) <target dev='hdc' bus='scsi'/>
I0224 01:01:28.182528 21922 main.go:141] libmachine: (multinode-858631-m02) <readonly/>
I0224 01:01:28.182542 21922 main.go:141] libmachine: (multinode-858631-m02) </disk>
I0224 01:01:28.182558 21922 main.go:141] libmachine: (multinode-858631-m02) <disk type='file' device='disk'>
I0224 01:01:28.182572 21922 main.go:141] libmachine: (multinode-858631-m02) <driver name='qemu' type='raw' cache='default' io='threads' />
I0224 01:01:28.182590 21922 main.go:141] libmachine: (multinode-858631-m02) <source file='/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/multinode-858631-m02.rawdisk'/>
I0224 01:01:28.182603 21922 main.go:141] libmachine: (multinode-858631-m02) <target dev='hda' bus='virtio'/>
I0224 01:01:28.182617 21922 main.go:141] libmachine: (multinode-858631-m02) </disk>
I0224 01:01:28.182630 21922 main.go:141] libmachine: (multinode-858631-m02) <interface type='network'>
I0224 01:01:28.182644 21922 main.go:141] libmachine: (multinode-858631-m02) <source network='mk-multinode-858631'/>
I0224 01:01:28.182657 21922 main.go:141] libmachine: (multinode-858631-m02) <model type='virtio'/>
I0224 01:01:28.182671 21922 main.go:141] libmachine: (multinode-858631-m02) </interface>
I0224 01:01:28.182683 21922 main.go:141] libmachine: (multinode-858631-m02) <interface type='network'>
I0224 01:01:28.182697 21922 main.go:141] libmachine: (multinode-858631-m02) <source network='default'/>
I0224 01:01:28.182709 21922 main.go:141] libmachine: (multinode-858631-m02) <model type='virtio'/>
I0224 01:01:28.182722 21922 main.go:141] libmachine: (multinode-858631-m02) </interface>
I0224 01:01:28.182735 21922 main.go:141] libmachine: (multinode-858631-m02) <serial type='pty'>
I0224 01:01:28.182748 21922 main.go:141] libmachine: (multinode-858631-m02) <target port='0'/>
I0224 01:01:28.182762 21922 main.go:141] libmachine: (multinode-858631-m02) </serial>
I0224 01:01:28.182776 21922 main.go:141] libmachine: (multinode-858631-m02) <console type='pty'>
I0224 01:01:28.182789 21922 main.go:141] libmachine: (multinode-858631-m02) <target type='serial' port='0'/>
I0224 01:01:28.182804 21922 main.go:141] libmachine: (multinode-858631-m02) </console>
I0224 01:01:28.182829 21922 main.go:141] libmachine: (multinode-858631-m02) <rng model='virtio'>
I0224 01:01:28.182845 21922 main.go:141] libmachine: (multinode-858631-m02) <backend model='random'>/dev/random</backend>
I0224 01:01:28.182857 21922 main.go:141] libmachine: (multinode-858631-m02) </rng>
I0224 01:01:28.182870 21922 main.go:141] libmachine: (multinode-858631-m02)
I0224 01:01:28.182881 21922 main.go:141] libmachine: (multinode-858631-m02)
I0224 01:01:28.182894 21922 main.go:141] libmachine: (multinode-858631-m02) </devices>
I0224 01:01:28.182906 21922 main.go:141] libmachine: (multinode-858631-m02) </domain>
I0224 01:01:28.182928 21922 main.go:141] libmachine: (multinode-858631-m02)
I0224 01:01:28.189990 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:dc:b5:5f in network default
I0224 01:01:28.190580 21922 main.go:141] libmachine: (multinode-858631-m02) Ensuring networks are active...
I0224 01:01:28.190608 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:28.191215 21922 main.go:141] libmachine: (multinode-858631-m02) Ensuring network default is active
I0224 01:01:28.191474 21922 main.go:141] libmachine: (multinode-858631-m02) Ensuring network mk-multinode-858631 is active
I0224 01:01:28.191828 21922 main.go:141] libmachine: (multinode-858631-m02) Getting domain xml...
I0224 01:01:28.192525 21922 main.go:141] libmachine: (multinode-858631-m02) Creating domain...
I0224 01:01:29.402197 21922 main.go:141] libmachine: (multinode-858631-m02) Waiting to get IP...
I0224 01:01:29.402891 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:29.403279 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:29.403334 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:29.403277 22156 retry.go:31] will retry after 277.751048ms: waiting for machine to come up
I0224 01:01:29.682862 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:29.683259 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:29.683291 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:29.683204 22156 retry.go:31] will retry after 237.567254ms: waiting for machine to come up
I0224 01:01:29.922625 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:29.923058 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:29.923089 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:29.922990 22156 retry.go:31] will retry after 445.26408ms: waiting for machine to come up
I0224 01:01:30.369421 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:30.369931 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:30.369961 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:30.369863 22156 retry.go:31] will retry after 368.046626ms: waiting for machine to come up
I0224 01:01:30.739335 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:30.739704 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:30.739744 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:30.739661 22156 retry.go:31] will retry after 678.03543ms: waiting for machine to come up
I0224 01:01:31.419348 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:31.419761 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:31.419790 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:31.419702 22156 retry.go:31] will retry after 740.078986ms: waiting for machine to come up
I0224 01:01:32.161606 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:32.162114 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:32.162144 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:32.162058 22156 retry.go:31] will retry after 1.178887374s: waiting for machine to come up
I0224 01:01:33.342862 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:33.343293 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:33.343315 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:33.343246 22156 retry.go:31] will retry after 1.221732807s: waiting for machine to come up
I0224 01:01:34.566725 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:34.567154 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:34.567178 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:34.567105 22156 retry.go:31] will retry after 1.636230736s: waiting for machine to come up
I0224 01:01:36.206068 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:36.206429 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:36.206457 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:36.206408 22156 retry.go:31] will retry after 2.225895186s: waiting for machine to come up
I0224 01:01:38.433607 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:38.434136 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:38.434175 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:38.434071 22156 retry.go:31] will retry after 1.749493158s: waiting for machine to come up
I0224 01:01:40.185273 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:40.185793 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:40.185835 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:40.185737 22156 retry.go:31] will retry after 3.620543501s: waiting for machine to come up
I0224 01:01:43.807940 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:43.808314 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:43.808341 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:43.808276 22156 retry.go:31] will retry after 2.729278179s: waiting for machine to come up
I0224 01:01:46.541068 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:46.541435 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
I0224 01:01:46.541457 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:46.541384 22156 retry.go:31] will retry after 3.976325501s: waiting for machine to come up
I0224 01:01:50.519773 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:50.520108 21922 main.go:141] libmachine: (multinode-858631-m02) Found IP for machine: 192.168.39.3
I0224 01:01:50.520123 21922 main.go:141] libmachine: (multinode-858631-m02) Reserving static IP address...
I0224 01:01:50.520133 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has current primary IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:50.520515 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find host DHCP lease matching {name: "multinode-858631-m02", mac: "52:54:00:14:f2:a2", ip: "192.168.39.3"} in network mk-multinode-858631
I0224 01:01:50.589344 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Getting to WaitForSSH function...
I0224 01:01:50.589371 21922 main.go:141] libmachine: (multinode-858631-m02) Reserved static IP address: 192.168.39.3
I0224 01:01:50.589385 21922 main.go:141] libmachine: (multinode-858631-m02) Waiting for SSH to be available...
I0224 01:01:50.592231 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:50.592702 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:50.592738 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:50.592848 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Using SSH client type: external
I0224 01:01:50.592877 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa (-rw-------)
I0224 01:01:50.592912 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0224 01:01:50.592928 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | About to run SSH command:
I0224 01:01:50.592945 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | exit 0
I0224 01:01:50.676906 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | SSH cmd err, output: <nil>:
I0224 01:01:50.677171 21922 main.go:141] libmachine: (multinode-858631-m02) KVM machine creation complete!
I0224 01:01:50.677418 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetConfigRaw
I0224 01:01:50.677919 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
I0224 01:01:50.678124 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
I0224 01:01:50.678275 21922 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0224 01:01:50.678292 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetState
I0224 01:01:50.679599 21922 main.go:141] libmachine: Detecting operating system of created instance...
I0224 01:01:50.679611 21922 main.go:141] libmachine: Waiting for SSH to be available...
I0224 01:01:50.679617 21922 main.go:141] libmachine: Getting to WaitForSSH function...
I0224 01:01:50.679624 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
I0224 01:01:50.681959 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:50.682294 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:50.682319 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:50.682503 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
I0224 01:01:50.682681 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:50.682831 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:50.682964 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
I0224 01:01:50.683121 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:01:50.683540 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.3 22 <nil> <nil>}
I0224 01:01:50.683553 21922 main.go:141] libmachine: About to run SSH command:
exit 0
I0224 01:01:50.788270 21922 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0224 01:01:50.788292 21922 main.go:141] libmachine: Detecting the provisioner...
I0224 01:01:50.788303 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
I0224 01:01:50.790952 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:50.791303 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:50.791339 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:50.791472 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
I0224 01:01:50.791655 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:50.791792 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:50.791932 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
I0224 01:01:50.792084 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:01:50.792473 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.3 22 <nil> <nil>}
I0224 01:01:50.792485 21922 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0224 01:01:50.901589 21922 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2021.02.12-1-g41e8300-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I0224 01:01:50.901644 21922 main.go:141] libmachine: found compatible host: buildroot
I0224 01:01:50.901658 21922 main.go:141] libmachine: Provisioning with buildroot...
I0224 01:01:50.901672 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetMachineName
I0224 01:01:50.901935 21922 buildroot.go:166] provisioning hostname "multinode-858631-m02"
I0224 01:01:50.901957 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetMachineName
I0224 01:01:50.902138 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
I0224 01:01:50.904729 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:50.905065 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:50.905094 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:50.905236 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
I0224 01:01:50.905408 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:50.905589 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:50.905757 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
I0224 01:01:50.905943 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:01:50.906346 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.3 22 <nil> <nil>}
I0224 01:01:50.906364 21922 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-858631-m02 && echo "multinode-858631-m02" | sudo tee /etc/hostname
I0224 01:01:51.024628 21922 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-858631-m02
I0224 01:01:51.024665 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
I0224 01:01:51.027088 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.027556 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:51.027583 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.027756 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
I0224 01:01:51.027917 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:51.028078 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:51.028175 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
I0224 01:01:51.028319 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:01:51.028778 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.3 22 <nil> <nil>}
I0224 01:01:51.028799 21922 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-858631-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-858631-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-858631-m02' | sudo tee -a /etc/hosts;
fi
fi
I0224 01:01:51.145030 21922 main.go:141] libmachine: SSH cmd err, output: <nil>:
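The shell snippet the provisioner just ran pins the new hostname to 127.0.1.1 in /etc/hosts: replace an existing 127.0.1.1 entry if one is present, otherwise append one. A sketch of how such a command string could be assembled in Go (`hostsUpdateCmd` is a hypothetical helper mirroring the logged command, not minikube's actual function):

```go
package main

import "fmt"

// hostsUpdateCmd builds a shell snippet that maps name to 127.0.1.1 in
// /etc/hosts: if no line already ends in name, either rewrite an existing
// 127.0.1.1 entry in place or append a fresh one.
func hostsUpdateCmd(name string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts
  else
    echo '127.0.1.1 %s' | sudo tee -a /etc/hosts
  fi
fi`, name, name, name)
}

func main() {
	fmt.Println(hostsUpdateCmd("multinode-858631-m02"))
}
```

Keeping the mapping on 127.0.1.1 (rather than 127.0.0.1) preserves `localhost` while still letting the node resolve its own hostname without DNS.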
I0224 01:01:51.145057 21922 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-4074/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-4074/.minikube}
I0224 01:01:51.145069 21922 buildroot.go:174] setting up certificates
I0224 01:01:51.145076 21922 provision.go:83] configureAuth start
I0224 01:01:51.145084 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetMachineName
I0224 01:01:51.145339 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetIP
I0224 01:01:51.148109 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.148443 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:51.148471 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.148591 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
I0224 01:01:51.150795 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.151048 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:51.151089 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.151222 21922 provision.go:138] copyHostCerts
I0224 01:01:51.151252 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
I0224 01:01:51.151287 21922 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem, removing ...
I0224 01:01:51.151295 21922 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
I0224 01:01:51.151365 21922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem (1123 bytes)
I0224 01:01:51.151431 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
I0224 01:01:51.151448 21922 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem, removing ...
I0224 01:01:51.151454 21922 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
I0224 01:01:51.151475 21922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem (1679 bytes)
I0224 01:01:51.151514 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
I0224 01:01:51.151529 21922 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem, removing ...
I0224 01:01:51.151535 21922 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
I0224 01:01:51.151553 21922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem (1078 bytes)
I0224 01:01:51.151595 21922 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem org=jenkins.multinode-858631-m02 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube multinode-858631-m02]
I0224 01:01:51.235724 21922 provision.go:172] copyRemoteCerts
I0224 01:01:51.235773 21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0224 01:01:51.235792 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
I0224 01:01:51.238284 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.238619 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:51.238654 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.238793 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
I0224 01:01:51.238963 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:51.239115 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
I0224 01:01:51.239218 21922 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa Username:docker}
I0224 01:01:51.326431 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0224 01:01:51.326502 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0224 01:01:51.347801 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem -> /etc/docker/server.pem
I0224 01:01:51.347857 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0224 01:01:51.369122 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0224 01:01:51.369173 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0224 01:01:51.391247 21922 provision.go:86] duration metric: configureAuth took 246.161676ms
I0224 01:01:51.391272 21922 buildroot.go:189] setting minikube options for container-runtime
I0224 01:01:51.391462 21922 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0224 01:01:51.391495 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
I0224 01:01:51.391757 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
I0224 01:01:51.394377 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.394683 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:51.394708 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.394856 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
I0224 01:01:51.395024 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:51.395162 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:51.395281 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
I0224 01:01:51.395495 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:01:51.395904 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.3 22 <nil> <nil>}
I0224 01:01:51.395918 21922 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0224 01:01:51.502533 21922 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0224 01:01:51.502555 21922 buildroot.go:70] root file system type: tmpfs
I0224 01:01:51.502674 21922 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0224 01:01:51.502695 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
I0224 01:01:51.505118 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.505450 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:51.505490 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.505633 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
I0224 01:01:51.505792 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:51.505923 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:51.506001 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
I0224 01:01:51.506103 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:01:51.506474 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.3 22 <nil> <nil>}
I0224 01:01:51.506533 21922 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.217"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0224 01:01:51.626613 21922 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.217
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0224 01:01:51.626641 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
I0224 01:01:51.629332 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.629732 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:51.629762 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:51.630006 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
I0224 01:01:51.630174 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:51.630350 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:51.630504 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
I0224 01:01:51.630657 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:01:51.631035 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.3 22 <nil> <nil>}
I0224 01:01:51.631051 21922 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0224 01:01:52.326910 21922 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
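[editor's note] The SSH command above is minikube's install-if-changed idiom: `diff -u old new || { mv; daemon-reload; restart; }` replaces the unit and restarts Docker only when the new file differs — and, as the `can't stat` output shows, a missing current file also takes the replace branch. A minimal sketch with hypothetical `/tmp` paths standing in for `/lib/systemd/system/docker.service{,.new}`:

```shell
# Hypothetical stand-ins for the real unit paths:
cur=/tmp/docker.service
new=/tmp/docker.service.new
printf 'old unit\n' > "$cur"
printf 'new unit\n' > "$new"

# Replace only when the files differ (or $cur is missing, which also
# makes diff exit non-zero). On a real node this branch would also run
# `systemctl daemon-reload` and `systemctl restart docker`.
diff -u "$cur" "$new" >/dev/null 2>&1 || {
  mv "$new" "$cur"
  echo "unit updated"
}
```

The `||` keeps the restart out of the happy path: an unchanged unit costs one `diff` and no service disruption.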
I0224 01:01:52.326936 21922 main.go:141] libmachine: Checking connection to Docker...
I0224 01:01:52.326947 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetURL
I0224 01:01:52.327935 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Using libvirt version 6000000
I0224 01:01:52.330148 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.330536 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:52.330561 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.330765 21922 main.go:141] libmachine: Docker is up and running!
I0224 01:01:52.330779 21922 main.go:141] libmachine: Reticulating splines...
I0224 01:01:52.330787 21922 client.go:171] LocalClient.Create took 24.50171783s
I0224 01:01:52.330804 21922 start.go:167] duration metric: libmachine.API.Create for "multinode-858631" took 24.501778325s
I0224 01:01:52.330812 21922 start.go:300] post-start starting for "multinode-858631-m02" (driver="kvm2")
I0224 01:01:52.330817 21922 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0224 01:01:52.330833 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
I0224 01:01:52.331079 21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0224 01:01:52.331111 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
I0224 01:01:52.333582 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.333978 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:52.334004 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.334162 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
I0224 01:01:52.334335 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:52.334476 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
I0224 01:01:52.334605 21922 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa Username:docker}
I0224 01:01:52.417937 21922 ssh_runner.go:195] Run: cat /etc/os-release
I0224 01:01:52.422286 21922 command_runner.go:130] > NAME=Buildroot
I0224 01:01:52.422300 21922 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
I0224 01:01:52.422305 21922 command_runner.go:130] > ID=buildroot
I0224 01:01:52.422313 21922 command_runner.go:130] > VERSION_ID=2021.02.12
I0224 01:01:52.422324 21922 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0224 01:01:52.422355 21922 info.go:137] Remote host: Buildroot 2021.02.12
I0224 01:01:52.422373 21922 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/addons for local assets ...
I0224 01:01:52.422434 21922 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/files for local assets ...
I0224 01:01:52.422508 21922 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> 111312.pem in /etc/ssl/certs
I0224 01:01:52.422517 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> /etc/ssl/certs/111312.pem
I0224 01:01:52.422595 21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0224 01:01:52.430179 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /etc/ssl/certs/111312.pem (1708 bytes)
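[editor's note] The `filesync` lines above show the mapping rule: any file under the profile's `files/` directory is copied to the same path rooted at `/` on the node (`files/etc/ssl/certs/111312.pem` → `/etc/ssl/certs/111312.pem`). A sketch of that mapping with temp directories standing in for both sides:

```shell
# Hypothetical stand-ins: $src for <minikube>/files, $dst for the node's /
src=/tmp/mk-files
dst=/tmp/node-root
mkdir -p "$src/etc/ssl/certs" "$dst"
echo "cert" > "$src/etc/ssl/certs/111312.pem"

# Mirror every file, preserving its relative path under the new root.
( cd "$src" && find . -type f ) | while read -r f; do
  mkdir -p "$dst/$(dirname "$f")"
  cp "$src/$f" "$dst/$f"
done

cat "$dst/etc/ssl/certs/111312.pem"   # → cert
```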
I0224 01:01:52.453345 21922 start.go:303] post-start completed in 122.521368ms
I0224 01:01:52.453391 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetConfigRaw
I0224 01:01:52.453911 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetIP
I0224 01:01:52.456385 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.456761 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:52.456784 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.457027 21922 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
I0224 01:01:52.457185 21922 start.go:128] duration metric: createHost completed in 24.645361196s
I0224 01:01:52.457206 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
I0224 01:01:52.459250 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.459680 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:52.459717 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.459867 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
I0224 01:01:52.460059 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:52.460219 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:52.460362 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
I0224 01:01:52.460539 21922 main.go:141] libmachine: Using SSH client type: native
I0224 01:01:52.460978 21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.3 22 <nil> <nil>}
I0224 01:01:52.460991 21922 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0224 01:01:52.569554 21922 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677200512.542153926
I0224 01:01:52.569573 21922 fix.go:207] guest clock: 1677200512.542153926
I0224 01:01:52.569582 21922 fix.go:220] Guest: 2023-02-24 01:01:52.542153926 +0000 UTC Remote: 2023-02-24 01:01:52.457195612 +0000 UTC m=+104.572643378 (delta=84.958314ms)
I0224 01:01:52.569598 21922 fix.go:191] guest clock delta is within tolerance: 84.958314ms
I0224 01:01:52.569604 21922 start.go:83] releasing machines lock for "multinode-858631-m02", held for 24.757869416s
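[editor's note] The `fix.go` lines above compare the guest VM's clock against the host's and accept the 84.958314ms delta as "within tolerance". A sketch of that check using the timestamps from the log (the 500ms tolerance below is a hypothetical value for illustration, not minikube's actual threshold):

```shell
# Millisecond timestamps taken from the log lines above:
guest_ms=1677200512542   # guest clock: 01:01:52.542
host_ms=1677200512457    # host clock:  01:01:52.457

delta_ms=$((guest_ms - host_ms))
[ "$delta_ms" -lt 0 ] && delta_ms=$((-delta_ms))   # absolute value

if [ "$delta_ms" -lt 500 ]; then                   # hypothetical tolerance
  echo "clock delta ${delta_ms}ms within tolerance"
fi
# → clock delta 85ms within tolerance
```

When the delta exceeds the tolerance, minikube resyncs the guest clock instead of proceeding.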
I0224 01:01:52.569628 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
I0224 01:01:52.569863 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetIP
I0224 01:01:52.572193 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.572559 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:52.572588 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.574873 21922 out.go:177] * Found network options:
I0224 01:01:52.576032 21922 out.go:177] - NO_PROXY=192.168.39.217
W0224 01:01:52.577039 21922 proxy.go:119] fail to check proxy env: Error ip not in block
I0224 01:01:52.577081 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
I0224 01:01:52.577553 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
I0224 01:01:52.577725 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
I0224 01:01:52.577793 21922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0224 01:01:52.577823 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
W0224 01:01:52.577872 21922 proxy.go:119] fail to check proxy env: Error ip not in block
I0224 01:01:52.577916 21922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0224 01:01:52.577928 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
I0224 01:01:52.580299 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.580645 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:52.580674 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.580692 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.580817 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
I0224 01:01:52.580975 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:52.581117 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:01:52.581141 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:01:52.581141 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
I0224 01:01:52.581309 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
I0224 01:01:52.581302 21922 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa Username:docker}
I0224 01:01:52.581466 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
I0224 01:01:52.581619 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
I0224 01:01:52.581752 21922 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa Username:docker}
I0224 01:01:52.659795 21922 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0224 01:01:52.660067 21922 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0224 01:01:52.660131 21922 ssh_runner.go:195] Run: which cri-dockerd
I0224 01:01:52.686380 21922 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0224 01:01:52.686438 21922 command_runner.go:130] > /usr/bin/cri-dockerd
I0224 01:01:52.686563 21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0224 01:01:52.694913 21922 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0224 01:01:52.710495 21922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0224 01:01:52.723985 21922 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0224 01:01:52.724048 21922 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
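[editor's note] The `find ... -exec mv {} {}.mk_disabled` step above neutralizes competing bridge/podman CNI configs by renaming them, so the container runtime's config scan no longer picks them up. The same pattern against a hypothetical temp directory (standing in for `/etc/cni/net.d`):

```shell
dir=/tmp/cni-demo
mkdir -p "$dir"
touch "$dir/87-podman-bridge.conflist" "$dir/10-flannel.conflist"

# Rename bridge/podman configs not already disabled; the -not clause
# makes the command idempotent across reruns.
find "$dir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$dir"
```

Renaming rather than deleting keeps the step reversible — dropping the `.mk_disabled` suffix restores the original configs.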
I0224 01:01:52.724063 21922 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0224 01:01:52.724141 21922 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0224 01:01:52.749723 21922 docker.go:630] Got preloaded images:
I0224 01:01:52.749743 21922 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
I0224 01:01:52.749788 21922 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0224 01:01:52.757997 21922 command_runner.go:139] > {"Repositories":{}}
I0224 01:01:52.758079 21922 ssh_runner.go:195] Run: which lz4
I0224 01:01:52.761357 21922 command_runner.go:130] > /usr/bin/lz4
I0224 01:01:52.761385 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0224 01:01:52.761454 21922 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0224 01:01:52.765118 21922 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0224 01:01:52.765250 21922 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0224 01:01:52.765272 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
I0224 01:01:54.420974 21922 docker.go:594] Took 1.659537 seconds to copy over tarball
I0224 01:01:54.421035 21922 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0224 01:01:57.163251 21922 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.742191243s)
I0224 01:01:57.163279 21922 ssh_runner.go:146] rm: /preloaded.tar.lz4
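[editor's note] The preload step above copies a ~416MB tarball to the node, unpacks it with `tar -I lz4 -C /var -xf /preloaded.tar.lz4`, then removes it. A gzip analogue of that copy-extract-delete flow (gzip stands in for lz4, which may not be installed everywhere; temp paths stand in for `/var` and `/preloaded.tar.lz4`):

```shell
src=/tmp/preload-src
dst=/tmp/preload-dst
mkdir -p "$src/lib/docker" "$dst"
echo "layer-data" > "$src/lib/docker/layer"

# Pack, then extract into the target root — equivalent to minikube's
# `tar -I lz4 -C /var -xf /preloaded.tar.lz4`, with -z instead of -I lz4.
tar -C "$src" -czf /tmp/preloaded.tar.gz .
tar -C "$dst" -xzf /tmp/preloaded.tar.gz
rm /tmp/preloaded.tar.gz   # mirrors the rm of /preloaded.tar.lz4 above

cat "$dst/lib/docker/layer"   # → layer-data
```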
I0224 01:01:57.199728 21922 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0224 01:01:57.208743 21922 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.9.3":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.6-0":"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c":"sha256:fce326961ae2d51a5f726883fd59d
2a8c2ccc3e45d3bb859882db58e422e59e7"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.26.1":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.26.1":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.26.1":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed0
3c2c3b26b70fd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.26.1":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
I0224 01:01:57.208899 21922 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
I0224 01:01:57.224814 21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:01:57.327699 21922 ssh_runner.go:195] Run: sudo systemctl restart docker
I0224 01:01:59.963289 21922 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.635549978s)
I0224 01:01:59.963339 21922 start.go:485] detecting cgroup driver to use...
I0224 01:01:59.963437 21922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 01:01:59.986195 21922 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0224 01:01:59.986222 21922 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
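[editor's note] The command above writes `/etc/crictl.yaml` via `printf ... | sudo tee`, pointing crictl at the containerd socket (it is rewritten a moment later to point at cri-dockerd instead). The same write with a temp path standing in for `/etc/crictl.yaml`:

```shell
cfg=/tmp/crictl.yaml
printf '%s' 'runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
' | tee "$cfg" >/dev/null

grep -c 'endpoint' "$cfg"   # → 2
```

`tee` is used rather than a `>` redirect so the write can run under `sudo` while the pipeline itself does not.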
I0224 01:01:59.986296 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0224 01:01:59.998783 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0224 01:02:00.007596 21922 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0224 01:02:00.007657 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0224 01:02:00.016595 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 01:02:00.025320 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0224 01:02:00.034095 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 01:02:00.042911 21922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0224 01:02:00.051732 21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
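[editor's note] The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place; the `^( *)key = .*$` → `\1key = value` pattern uses a captured group to preserve each line's indentation, which matters in TOML sections. The `SystemdCgroup` edit against a sample file:

```shell
cfg=/tmp/config.toml
printf '  SystemdCgroup = true\n' > "$cfg"

# Same pattern as the log: \1 re-emits the captured leading spaces.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

cat "$cfg"   # →   SystemdCgroup = false
```

Note `-i` without a suffix is GNU sed syntax; BSD sed would need `-i ''`.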
I0224 01:02:00.060301 21922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0224 01:02:00.067936 21922 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0224 01:02:00.067984 21922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0224 01:02:00.075579 21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:02:00.171082 21922 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0224 01:02:00.188454 21922 start.go:485] detecting cgroup driver to use...
I0224 01:02:00.188522 21922 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0224 01:02:00.210571 21922 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0224 01:02:00.210594 21922 command_runner.go:130] > [Unit]
I0224 01:02:00.210603 21922 command_runner.go:130] > Description=Docker Application Container Engine
I0224 01:02:00.210611 21922 command_runner.go:130] > Documentation=https://docs.docker.com
I0224 01:02:00.210620 21922 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0224 01:02:00.210627 21922 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0224 01:02:00.210638 21922 command_runner.go:130] > StartLimitBurst=3
I0224 01:02:00.210649 21922 command_runner.go:130] > StartLimitIntervalSec=60
I0224 01:02:00.210657 21922 command_runner.go:130] > [Service]
I0224 01:02:00.210667 21922 command_runner.go:130] > Type=notify
I0224 01:02:00.210673 21922 command_runner.go:130] > Restart=on-failure
I0224 01:02:00.210684 21922 command_runner.go:130] > Environment=NO_PROXY=192.168.39.217
I0224 01:02:00.210695 21922 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0224 01:02:00.210707 21922 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0224 01:02:00.210715 21922 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0224 01:02:00.210724 21922 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0224 01:02:00.210730 21922 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0224 01:02:00.210739 21922 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0224 01:02:00.210746 21922 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0224 01:02:00.210758 21922 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0224 01:02:00.210766 21922 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0224 01:02:00.210772 21922 command_runner.go:130] > ExecStart=
I0224 01:02:00.210785 21922 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0224 01:02:00.210792 21922 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0224 01:02:00.210798 21922 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0224 01:02:00.210807 21922 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0224 01:02:00.210814 21922 command_runner.go:130] > LimitNOFILE=infinity
I0224 01:02:00.210822 21922 command_runner.go:130] > LimitNPROC=infinity
I0224 01:02:00.210826 21922 command_runner.go:130] > LimitCORE=infinity
I0224 01:02:00.210831 21922 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0224 01:02:00.210839 21922 command_runner.go:130] > # Only systemd 226 and above support this version.
I0224 01:02:00.210843 21922 command_runner.go:130] > TasksMax=infinity
I0224 01:02:00.210849 21922 command_runner.go:130] > TimeoutStartSec=0
I0224 01:02:00.210858 21922 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0224 01:02:00.210861 21922 command_runner.go:130] > Delegate=yes
I0224 01:02:00.210869 21922 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0224 01:02:00.210875 21922 command_runner.go:130] > KillMode=process
I0224 01:02:00.210884 21922 command_runner.go:130] > [Install]
I0224 01:02:00.210889 21922 command_runner.go:130] > WantedBy=multi-user.target
I0224 01:02:00.210945 21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0224 01:02:00.223572 21922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0224 01:02:00.241615 21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0224 01:02:00.253308 21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 01:02:00.264535 21922 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0224 01:02:00.293560 21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 01:02:00.305828 21922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 01:02:00.323783 21922 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0224 01:02:00.323805 21922 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0224 01:02:00.324209 21922 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0224 01:02:00.426533 21922 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0224 01:02:00.528862 21922 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0224 01:02:00.528899 21922 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0224 01:02:00.545354 21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:02:00.646170 21922 ssh_runner.go:195] Run: sudo systemctl restart docker
I0224 01:02:01.991521 21922 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.345312876s)
I0224 01:02:01.991674 21922 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0224 01:02:02.095408 21922 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0224 01:02:02.202313 21922 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0224 01:02:02.308387 21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:02:02.413315 21922 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0224 01:02:02.428724 21922 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0224 01:02:02.428788 21922 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0224 01:02:02.433754 21922 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0224 01:02:02.433769 21922 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0224 01:02:02.433775 21922 command_runner.go:130] > Device: 16h/22d Inode: 982 Links: 1
I0224 01:02:02.433784 21922 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0224 01:02:02.433793 21922 command_runner.go:130] > Access: 2023-02-24 01:02:02.409160681 +0000
I0224 01:02:02.433801 21922 command_runner.go:130] > Modify: 2023-02-24 01:02:02.409160681 +0000
I0224 01:02:02.433812 21922 command_runner.go:130] > Change: 2023-02-24 01:02:02.411162454 +0000
I0224 01:02:02.433822 21922 command_runner.go:130] > Birth: -
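The "Will wait 60s for socket path" step at start.go:532 amounts to polling stat on the socket until it appears or a deadline passes. A sketch of that loop, with the timeout and path parameterized and a /tmp stand-in for /var/run/cri-dockerd.sock:

```shell
# Poll for a path with stat, one-second intervals, bounded retries.
wait_for_path() {
  path=$1; tries=$2; i=0
  while [ "$i" -lt "$tries" ]; do
    stat "$path" >/dev/null 2>&1 && return 0
    sleep 1; i=$((i + 1))
  done
  return 1
}
touch /tmp/demo.sock   # stand-in for the real socket appearing
wait_for_path /tmp/demo.sock 5 && echo "socket ready"
```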
I0224 01:02:02.433996 21922 start.go:553] Will wait 60s for crictl version
I0224 01:02:02.434045 21922 ssh_runner.go:195] Run: which crictl
I0224 01:02:02.437424 21922 command_runner.go:130] > /usr/bin/crictl
I0224 01:02:02.437483 21922 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0224 01:02:02.538455 21922 command_runner.go:130] > Version: 0.1.0
I0224 01:02:02.538493 21922 command_runner.go:130] > RuntimeName: docker
I0224 01:02:02.538502 21922 command_runner.go:130] > RuntimeVersion: 20.10.23
I0224 01:02:02.538510 21922 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0224 01:02:02.538932 21922 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
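The crictl version output above is plain "Key: value" lines; extracting a single field is a one-line awk over that text. A sketch using the exact lines captured in the log:

```shell
# Pull one field out of crictl-version-style output.
out='Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2'
runtime=$(printf '%s\n' "$out" | awk -F': ' '/^RuntimeName:/ {print $2}')
echo "$runtime"
```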
I0224 01:02:02.538999 21922 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0224 01:02:02.570135 21922 command_runner.go:130] > 20.10.23
I0224 01:02:02.570215 21922 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0224 01:02:02.599099 21922 command_runner.go:130] > 20.10.23
I0224 01:02:02.602493 21922 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
I0224 01:02:02.604074 21922 out.go:177] - env NO_PROXY=192.168.39.217
I0224 01:02:02.605399 21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetIP
I0224 01:02:02.608005 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:02:02.608335 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
I0224 01:02:02.608357 21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
I0224 01:02:02.608543 21922 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0224 01:02:02.612457 21922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
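The /etc/hosts command above is an idempotent replace-or-append: filter out any existing host.minikube.internal line, then append the fresh one. The same pipeline replayed against a scratch copy so the behavior is visible without sudo (file paths here are illustrative):

```shell
# Replace-or-append a hosts entry, as the logged grep/echo/cp pipeline does.
hosts=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n1.2.3.4\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$hosts"
cat "$hosts"
```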
I0224 01:02:02.624406 21922 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631 for IP: 192.168.39.3
I0224 01:02:02.624427 21922 certs.go:186] acquiring lock for shared ca certs: {Name:mk0c9037d1d3974a6bc5ba375ef4804966dba284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 01:02:02.624540 21922 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key
I0224 01:02:02.624580 21922 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key
I0224 01:02:02.624592 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0224 01:02:02.624605 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0224 01:02:02.624618 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0224 01:02:02.624631 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0224 01:02:02.624680 21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem (1338 bytes)
W0224 01:02:02.624707 21922 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131_empty.pem, impossibly tiny 0 bytes
I0224 01:02:02.624717 21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem (1679 bytes)
I0224 01:02:02.624755 21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem (1078 bytes)
I0224 01:02:02.624782 21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem (1123 bytes)
I0224 01:02:02.624804 21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem (1679 bytes)
I0224 01:02:02.624841 21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem (1708 bytes)
I0224 01:02:02.624866 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem -> /usr/share/ca-certificates/11131.pem
I0224 01:02:02.624883 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> /usr/share/ca-certificates/111312.pem
I0224 01:02:02.624895 21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0224 01:02:02.625168 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0224 01:02:02.646667 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0224 01:02:02.668358 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0224 01:02:02.690086 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0224 01:02:02.714912 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem --> /usr/share/ca-certificates/11131.pem (1338 bytes)
I0224 01:02:02.738896 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /usr/share/ca-certificates/111312.pem (1708 bytes)
I0224 01:02:02.763215 21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0224 01:02:02.788917 21922 ssh_runner.go:195] Run: openssl version
I0224 01:02:02.794215 21922 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0224 01:02:02.794488 21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0224 01:02:02.804509 21922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0224 01:02:02.808922 21922 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
I0224 01:02:02.808946 21922 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
I0224 01:02:02.808985 21922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0224 01:02:02.814327 21922 command_runner.go:130] > b5213941
I0224 01:02:02.814513 21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0224 01:02:02.824104 21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11131.pem && ln -fs /usr/share/ca-certificates/11131.pem /etc/ssl/certs/11131.pem"
I0224 01:02:02.833352 21922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11131.pem
I0224 01:02:02.837742 21922 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/11131.pem
I0224 01:02:02.838118 21922 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/11131.pem
I0224 01:02:02.838153 21922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11131.pem
I0224 01:02:02.843274 21922 command_runner.go:130] > 51391683
I0224 01:02:02.843309 21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11131.pem /etc/ssl/certs/51391683.0"
I0224 01:02:02.852208 21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111312.pem && ln -fs /usr/share/ca-certificates/111312.pem /etc/ssl/certs/111312.pem"
I0224 01:02:02.861585 21922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111312.pem
I0224 01:02:02.865768 21922 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/111312.pem
I0224 01:02:02.866257 21922 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/111312.pem
I0224 01:02:02.866298 21922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111312.pem
I0224 01:02:02.871503 21922 command_runner.go:130] > 3ec20f2e
I0224 01:02:02.871554 21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111312.pem /etc/ssl/certs/3ec20f2e.0"
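Each CA install above follows the OpenSSL c_rehash convention: the symlink name is the certificate's subject hash (from `openssl x509 -hash`) plus a ".0" suffix, which is how b5213941.0, 51391683.0 and 3ec20f2e.0 were derived. A sketch of just the name derivation, assuming openssl is on PATH and using a throwaway self-signed cert in place of minikubeCA.pem:

```shell
# Derive the c_rehash-style link name for a CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/k.pem \
  -out /tmp/demo-ca.pem -days 1 -subj "/CN=demo" 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)
echo "link name: ${hash}.0"
```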
I0224 01:02:02.880594 21922 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0224 01:02:02.914528 21922 command_runner.go:130] > cgroupfs
I0224 01:02:02.914576 21922 cni.go:84] Creating CNI manager for ""
I0224 01:02:02.914593 21922 cni.go:136] 2 nodes found, recommending kindnet
I0224 01:02:02.914611 21922 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0224 01:02:02.914637 21922 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-858631 NodeName:multinode-858631-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0224 01:02:02.914761 21922 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.3
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-858631-m02"
kubeletExtraArgs:
node-ip: 192.168.39.3
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0224 01:02:02.914827 21922 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-858631-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0224 01:02:02.914886 21922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0224 01:02:02.922875 21922 command_runner.go:130] > kubeadm
I0224 01:02:02.922889 21922 command_runner.go:130] > kubectl
I0224 01:02:02.922895 21922 command_runner.go:130] > kubelet
I0224 01:02:02.923167 21922 binaries.go:44] Found k8s binaries, skipping transfer
I0224 01:02:02.925191 21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0224 01:02:02.934316 21922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
I0224 01:02:02.949693 21922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0224 01:02:02.964775 21922 ssh_runner.go:195] Run: grep 192.168.39.217 control-plane.minikube.internal$ /etc/hosts
I0224 01:02:02.968499 21922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0224 01:02:02.980060 21922 host.go:66] Checking if "multinode-858631" exists ...
I0224 01:02:02.980294 21922 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0224 01:02:02.980413 21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0224 01:02:02.980465 21922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 01:02:02.994585 21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33111
I0224 01:02:02.994927 21922 main.go:141] libmachine: () Calling .GetVersion
I0224 01:02:02.995351 21922 main.go:141] libmachine: Using API Version 1
I0224 01:02:02.995370 21922 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 01:02:02.995664 21922 main.go:141] libmachine: () Calling .GetMachineName
I0224 01:02:02.995820 21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
I0224 01:02:02.995966 21922 start.go:301] JoinCluster: &{Name:multinode-858631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0224 01:02:02.996049 21922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
I0224 01:02:02.996069 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
I0224 01:02:02.998945 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:02:02.999334 21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
I0224 01:02:02.999361 21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
I0224 01:02:02.999494 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
I0224 01:02:02.999654 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
I0224 01:02:02.999787 21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
I0224 01:02:02.999900 21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
I0224 01:02:03.187838 21922 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token jpub3d.g9uycynvwqj91385 --discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1
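The control-plane node above emitted a full join command via `kubeadm token create --print-join-command`; the interesting pieces are the bootstrap token and the discovery CA cert hash, which the subsequent `kubeadm join` reuses verbatim. A sketch of extracting both from the exact line in the log:

```shell
# Parse token and discovery hash out of a printed kubeadm join command.
join='kubeadm join control-plane.minikube.internal:8443 --token jpub3d.g9uycynvwqj91385 --discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1'
token=$(printf '%s\n' "$join" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
cahash=$(printf '%s\n' "$join" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token=$token"
echo "hash=$cahash"
```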
I0224 01:02:03.191840 21922 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
I0224 01:02:03.191878 21922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jpub3d.g9uycynvwqj91385 --discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-858631-m02"
I0224 01:02:03.303186 21922 command_runner.go:130] > [preflight] Running pre-flight checks
I0224 01:02:03.554006 21922 command_runner.go:130] > [preflight] Reading configuration from the cluster...
I0224 01:02:03.554029 21922 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0224 01:02:03.590491 21922 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0224 01:02:03.590522 21922 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0224 01:02:03.590531 21922 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0224 01:02:03.698835 21922 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0224 01:02:05.219161 21922 command_runner.go:130] > This node has joined the cluster:
I0224 01:02:05.219190 21922 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
I0224 01:02:05.219200 21922 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
I0224 01:02:05.219210 21922 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
I0224 01:02:05.220747 21922 command_runner.go:130] ! W0224 01:02:03.287797 1271 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0224 01:02:05.220772 21922 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0224 01:02:05.221125 21922 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jpub3d.g9uycynvwqj91385 --discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-858631-m02": (2.029224613s)
I0224 01:02:05.221156 21922 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0224 01:02:05.458105 21922 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
I0224 01:02:05.458139 21922 start.go:303] JoinCluster complete in 2.462175128s
I0224 01:02:05.458149 21922 cni.go:84] Creating CNI manager for ""
I0224 01:02:05.458154 21922 cni.go:136] 2 nodes found, recommending kindnet
I0224 01:02:05.458194 21922 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0224 01:02:05.463696 21922 command_runner.go:130] > File: /opt/cni/bin/portmap
I0224 01:02:05.463724 21922 command_runner.go:130] > Size: 2798344 Blocks: 5472 IO Block: 4096 regular file
I0224 01:02:05.463733 21922 command_runner.go:130] > Device: 11h/17d Inode: 3542 Links: 1
I0224 01:02:05.463744 21922 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0224 01:02:05.463752 21922 command_runner.go:130] > Access: 2023-02-24 01:00:20.396182736 +0000
I0224 01:02:05.463761 21922 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
I0224 01:02:05.463770 21922 command_runner.go:130] > Change: 2023-02-24 01:00:18.603182736 +0000
I0224 01:02:05.463773 21922 command_runner.go:130] > Birth: -
I0224 01:02:05.463863 21922 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
I0224 01:02:05.463878 21922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0224 01:02:05.480372 21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0224 01:02:05.758307 21922 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0224 01:02:05.758335 21922 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0224 01:02:05.758343 21922 command_runner.go:130] > serviceaccount/kindnet unchanged
I0224 01:02:05.758350 21922 command_runner.go:130] > daemonset.apps/kindnet configured
I0224 01:02:05.758773 21922 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15909-4074/kubeconfig
I0224 01:02:05.758991 21922 kapi.go:59] client config for multinode-858631: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0224 01:02:05.759246 21922 round_trippers.go:463] GET https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0224 01:02:05.759256 21922 round_trippers.go:469] Request Headers:
I0224 01:02:05.759264 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:05.759270 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:05.760960 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:02:05.760980 21922 round_trippers.go:577] Response Headers:
I0224 01:02:05.760986 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:05.760992 21922 round_trippers.go:580] Content-Length: 291
I0224 01:02:05.760997 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:05 GMT
I0224 01:02:05.761003 21922 round_trippers.go:580] Audit-Id: b6753881-ac46-4058-9359-5b36abe09428
I0224 01:02:05.761009 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:05.761014 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:05.761020 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:05.761038 21922 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1feec0bc-8f6f-4ed8-8e86-04a25e711058","resourceVersion":"416","creationTimestamp":"2023-02-24T01:00:59Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0224 01:02:05.761095 21922 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-858631" context rescaled to 1 replicas
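The coredns rescale above goes through the deployment's scale subresource; the request path is assembled from the apiserver host, the namespace, and the deployment name. A sketch of that path construction, using the host and names from the log:

```shell
# Build the apps/v1 scale-subresource URL queried in the log.
host=https://192.168.39.217:8443
ns=kube-system
deploy=coredns
url="$host/apis/apps/v1/namespaces/$ns/deployments/$deploy/scale"
echo "$url"
```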
I0224 01:02:05.761117 21922 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
I0224 01:02:05.763182 21922 out.go:177] * Verifying Kubernetes components...
I0224 01:02:05.764385 21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0224 01:02:05.778613 21922 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15909-4074/kubeconfig
I0224 01:02:05.778913 21922 kapi.go:59] client config for multinode-858631: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0224 01:02:05.779259 21922 node_ready.go:35] waiting up to 6m0s for node "multinode-858631-m02" to be "Ready" ...
I0224 01:02:05.779327 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:05.779337 21922 round_trippers.go:469] Request Headers:
I0224 01:02:05.779350 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:05.779361 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:05.781214 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:02:05.781236 21922 round_trippers.go:577] Response Headers:
I0224 01:02:05.781247 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:05 GMT
I0224 01:02:05.781256 21922 round_trippers.go:580] Audit-Id: 8c29cb5b-ca99-4f8b-9b49-4db4f3341e6b
I0224 01:02:05.781265 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:05.781274 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:05.781286 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:05.781298 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:05.781435 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"476","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3987 chars]
I0224 01:02:06.282053 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:06.282073 21922 round_trippers.go:469] Request Headers:
I0224 01:02:06.282082 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:06.282092 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:06.286304 21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0224 01:02:06.286327 21922 round_trippers.go:577] Response Headers:
I0224 01:02:06.286336 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:06.286342 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:06.286347 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:06.286353 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:06 GMT
I0224 01:02:06.286359 21922 round_trippers.go:580] Audit-Id: 93b5a00a-aa54-43cc-a4d7-0889df01d6d6
I0224 01:02:06.286364 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:06.286625 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:06.782050 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:06.782079 21922 round_trippers.go:469] Request Headers:
I0224 01:02:06.782090 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:06.782105 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:06.787083 21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0224 01:02:06.787111 21922 round_trippers.go:577] Response Headers:
I0224 01:02:06.787122 21922 round_trippers.go:580] Audit-Id: 40a918e8-cef0-4ea6-b073-7e1e2ccd4bff
I0224 01:02:06.787131 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:06.787139 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:06.787148 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:06.787158 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:06.787164 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:06 GMT
I0224 01:02:06.787344 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:07.282686 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:07.282708 21922 round_trippers.go:469] Request Headers:
I0224 01:02:07.282717 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:07.282723 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:07.285275 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:07.285303 21922 round_trippers.go:577] Response Headers:
I0224 01:02:07.285314 21922 round_trippers.go:580] Audit-Id: 15f290ee-ba9a-4585-ae28-7c45df7dec0e
I0224 01:02:07.285323 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:07.285333 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:07.285342 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:07.285350 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:07.285359 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:07 GMT
I0224 01:02:07.285509 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:07.782037 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:07.782061 21922 round_trippers.go:469] Request Headers:
I0224 01:02:07.782075 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:07.782083 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:07.784401 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:07.784426 21922 round_trippers.go:577] Response Headers:
I0224 01:02:07.784436 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:07.784445 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:07.784454 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:07.784464 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:07 GMT
I0224 01:02:07.784472 21922 round_trippers.go:580] Audit-Id: 652b872b-7c19-4a9d-a726-7a63aeb72144
I0224 01:02:07.784489 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:07.784673 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:07.784932 21922 node_ready.go:58] node "multinode-858631-m02" has status "Ready":"False"
I0224 01:02:08.282034 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:08.282057 21922 round_trippers.go:469] Request Headers:
I0224 01:02:08.282069 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:08.282078 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:08.284448 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:08.284473 21922 round_trippers.go:577] Response Headers:
I0224 01:02:08.284483 21922 round_trippers.go:580] Audit-Id: 2f93f6ce-ce6e-4228-bb07-9e38a91554c7
I0224 01:02:08.284492 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:08.284506 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:08.284518 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:08.284525 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:08.284533 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:08 GMT
I0224 01:02:08.284701 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:08.782368 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:08.782395 21922 round_trippers.go:469] Request Headers:
I0224 01:02:08.782406 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:08.782413 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:08.785295 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:08.785315 21922 round_trippers.go:577] Response Headers:
I0224 01:02:08.785325 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:08.785334 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:08 GMT
I0224 01:02:08.785341 21922 round_trippers.go:580] Audit-Id: 7c93d20c-db45-4e4d-88cc-2d314e25e39a
I0224 01:02:08.785349 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:08.785358 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:08.785371 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:08.785926 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:09.282621 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:09.282649 21922 round_trippers.go:469] Request Headers:
I0224 01:02:09.282657 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:09.282663 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:09.285036 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:09.285061 21922 round_trippers.go:577] Response Headers:
I0224 01:02:09.285071 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:09 GMT
I0224 01:02:09.285081 21922 round_trippers.go:580] Audit-Id: 4ff62658-dc4c-4656-b29b-cc96e459d5dd
I0224 01:02:09.285090 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:09.285101 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:09.285114 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:09.285126 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:09.285264 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:09.783004 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:09.783028 21922 round_trippers.go:469] Request Headers:
I0224 01:02:09.783036 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:09.783042 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:09.785312 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:09.785328 21922 round_trippers.go:577] Response Headers:
I0224 01:02:09.785335 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:09.785341 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:09 GMT
I0224 01:02:09.785350 21922 round_trippers.go:580] Audit-Id: 80b8f0bd-af67-4e75-87f0-5f163521c4e7
I0224 01:02:09.785355 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:09.785368 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:09.785376 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:09.785685 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:09.785940 21922 node_ready.go:58] node "multinode-858631-m02" has status "Ready":"False"
I0224 01:02:10.282329 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:10.282355 21922 round_trippers.go:469] Request Headers:
I0224 01:02:10.282365 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:10.282374 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:10.284712 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:10.284730 21922 round_trippers.go:577] Response Headers:
I0224 01:02:10.284737 21922 round_trippers.go:580] Audit-Id: 5a7eadab-920a-4585-8976-f61660a5ae54
I0224 01:02:10.284743 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:10.284748 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:10.284754 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:10.284762 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:10.284770 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:10 GMT
I0224 01:02:10.285190 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:10.782911 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:10.782939 21922 round_trippers.go:469] Request Headers:
I0224 01:02:10.782951 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:10.782960 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:10.785111 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:10.785135 21922 round_trippers.go:577] Response Headers:
I0224 01:02:10.785147 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:10.785155 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:10.785163 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:10.785171 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:10 GMT
I0224 01:02:10.785179 21922 round_trippers.go:580] Audit-Id: 1be1ec50-92d5-4955-bbd8-dc15edf8cd74
I0224 01:02:10.785187 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:10.785609 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:11.282226 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:11.282254 21922 round_trippers.go:469] Request Headers:
I0224 01:02:11.282264 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:11.282272 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:11.284394 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:11.284415 21922 round_trippers.go:577] Response Headers:
I0224 01:02:11.284423 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:11.284429 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:11.284435 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:11.284440 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:11 GMT
I0224 01:02:11.284446 21922 round_trippers.go:580] Audit-Id: 7dd30b71-b225-4ba5-a6e4-bdc37eb82a93
I0224 01:02:11.284452 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:11.284557 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:11.782084 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:11.782107 21922 round_trippers.go:469] Request Headers:
I0224 01:02:11.782115 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:11.782122 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:11.784525 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:11.784548 21922 round_trippers.go:577] Response Headers:
I0224 01:02:11.784559 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:11 GMT
I0224 01:02:11.784567 21922 round_trippers.go:580] Audit-Id: f304a726-4324-4302-ad9c-d5b091415fea
I0224 01:02:11.784576 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:11.784584 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:11.784596 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:11.784605 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:11.784807 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:12.282842 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:12.282870 21922 round_trippers.go:469] Request Headers:
I0224 01:02:12.282879 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:12.282885 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:12.285157 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:12.285180 21922 round_trippers.go:577] Response Headers:
I0224 01:02:12.285190 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:12.285199 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:12.285207 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:12.285216 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:12.285228 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:12 GMT
I0224 01:02:12.285238 21922 round_trippers.go:580] Audit-Id: dc7c542e-1450-465c-abb6-40ba4f5772b0
I0224 01:02:12.285493 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:12.285874 21922 node_ready.go:58] node "multinode-858631-m02" has status "Ready":"False"
I0224 01:02:12.782696 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:12.782716 21922 round_trippers.go:469] Request Headers:
I0224 01:02:12.782724 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:12.782730 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:12.785264 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:12.785288 21922 round_trippers.go:577] Response Headers:
I0224 01:02:12.785298 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:12.785307 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:12 GMT
I0224 01:02:12.785315 21922 round_trippers.go:580] Audit-Id: 73f7401d-1343-4018-add1-f0e611b621ca
I0224 01:02:12.785328 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:12.785340 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:12.785348 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:12.785558 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:13.282993 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:13.283026 21922 round_trippers.go:469] Request Headers:
I0224 01:02:13.283038 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:13.283048 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:13.285725 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:13.285743 21922 round_trippers.go:577] Response Headers:
I0224 01:02:13.285753 21922 round_trippers.go:580] Audit-Id: a32c6ee2-8f25-43b6-82ab-80f87c8f8d46
I0224 01:02:13.285759 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:13.285764 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:13.285769 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:13.285776 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:13.285785 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:13 GMT
I0224 01:02:13.286231 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:13.782956 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:13.782983 21922 round_trippers.go:469] Request Headers:
I0224 01:02:13.782996 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:13.783006 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:13.785805 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:13.785835 21922 round_trippers.go:577] Response Headers:
I0224 01:02:13.785842 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:13.785847 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:13.785855 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:13.785860 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:13 GMT
I0224 01:02:13.785866 21922 round_trippers.go:580] Audit-Id: 2603e4e6-fdf1-45ef-88af-cfa11296d9b7
I0224 01:02:13.785875 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:13.786049 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:14.282694 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:14.282721 21922 round_trippers.go:469] Request Headers:
I0224 01:02:14.282732 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:14.282739 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:14.285702 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:14.285726 21922 round_trippers.go:577] Response Headers:
I0224 01:02:14.285736 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:14.285746 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:14 GMT
I0224 01:02:14.285754 21922 round_trippers.go:580] Audit-Id: 55557df8-6a0c-45aa-b1af-a5e1a0a0278c
I0224 01:02:14.285764 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:14.285772 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:14.285787 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:14.285903 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
I0224 01:02:14.286244 21922 node_ready.go:58] node "multinode-858631-m02" has status "Ready":"False"
I0224 01:02:14.782677 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:14.782710 21922 round_trippers.go:469] Request Headers:
I0224 01:02:14.782723 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:14.782733 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:14.786057 21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0224 01:02:14.786079 21922 round_trippers.go:577] Response Headers:
I0224 01:02:14.786090 21922 round_trippers.go:580] Audit-Id: 612cef84-f190-4adf-ad57-d39992e3c8a6
I0224 01:02:14.786098 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:14.786107 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:14.786116 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:14.786134 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:14.786143 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:14 GMT
I0224 01:02:14.786222 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"502","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4265 chars]
I0224 01:02:15.282788 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:15.282810 21922 round_trippers.go:469] Request Headers:
I0224 01:02:15.282819 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:15.282825 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:15.285349 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:15.285363 21922 round_trippers.go:577] Response Headers:
I0224 01:02:15.285370 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:15.285377 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:15 GMT
I0224 01:02:15.285390 21922 round_trippers.go:580] Audit-Id: aae233cb-3e01-4ca5-854c-98eef75f4982
I0224 01:02:15.285404 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:15.285416 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:15.285427 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:15.285677 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"502","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4265 chars]
I0224 01:02:15.782340 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:15.782363 21922 round_trippers.go:469] Request Headers:
I0224 01:02:15.782371 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:15.782377 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:15.784840 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:15.784864 21922 round_trippers.go:577] Response Headers:
I0224 01:02:15.784875 21922 round_trippers.go:580] Audit-Id: 3197293c-c98c-4d57-8f8b-4db53f94813c
I0224 01:02:15.784884 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:15.784891 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:15.784897 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:15.784904 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:15.784910 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:15 GMT
I0224 01:02:15.785575 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"502","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4265 chars]
I0224 01:02:16.282165 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:16.282197 21922 round_trippers.go:469] Request Headers:
I0224 01:02:16.282206 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:16.282212 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:16.284645 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:16.284669 21922 round_trippers.go:577] Response Headers:
I0224 01:02:16.284679 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:16 GMT
I0224 01:02:16.284690 21922 round_trippers.go:580] Audit-Id: cedcf71b-4e25-4d70-a91f-0054be8450a3
I0224 01:02:16.284702 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:16.284710 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:16.284719 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:16.284728 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:16.284890 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"502","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4265 chars]
I0224 01:02:16.782504 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:16.782529 21922 round_trippers.go:469] Request Headers:
I0224 01:02:16.782549 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:16.782556 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:16.785126 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:16.785150 21922 round_trippers.go:577] Response Headers:
I0224 01:02:16.785159 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:16.785167 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:16 GMT
I0224 01:02:16.785174 21922 round_trippers.go:580] Audit-Id: 5e75f94f-2e8f-4d17-bf56-659cfbd413d1
I0224 01:02:16.785181 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:16.785189 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:16.785197 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:16.785463 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"502","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4265 chars]
I0224 01:02:16.785728 21922 node_ready.go:58] node "multinode-858631-m02" has status "Ready":"False"
I0224 01:02:17.282739 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:17.282771 21922 round_trippers.go:469] Request Headers:
I0224 01:02:17.282783 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:17.282793 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:17.285133 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:17.285155 21922 round_trippers.go:577] Response Headers:
I0224 01:02:17.285162 21922 round_trippers.go:580] Audit-Id: 9d749906-ff8f-4edb-95dc-af80d4a9dc8b
I0224 01:02:17.285168 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:17.285174 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:17.285179 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:17.285185 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:17.285190 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:17 GMT
I0224 01:02:17.285510 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"502","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4265 chars]
I0224 01:02:17.782150 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:17.782171 21922 round_trippers.go:469] Request Headers:
I0224 01:02:17.782179 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:17.782186 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:17.785063 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:17.785082 21922 round_trippers.go:577] Response Headers:
I0224 01:02:17.785089 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:17.785095 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:17.785100 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:17.785106 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:17.785111 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:17 GMT
I0224 01:02:17.785123 21922 round_trippers.go:580] Audit-Id: e8e42f89-79fa-4bed-80ba-b62b6eb17a9c
I0224 01:02:17.785438 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"507","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4131 chars]
I0224 01:02:17.785694 21922 node_ready.go:49] node "multinode-858631-m02" has status "Ready":"True"
I0224 01:02:17.785715 21922 node_ready.go:38] duration metric: took 12.006435326s waiting for node "multinode-858631-m02" to be "Ready" ...
I0224 01:02:17.785727 21922 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0224 01:02:17.785783 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
I0224 01:02:17.785791 21922 round_trippers.go:469] Request Headers:
I0224 01:02:17.785798 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:17.785808 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:17.789843 21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0224 01:02:17.789865 21922 round_trippers.go:577] Response Headers:
I0224 01:02:17.789876 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:17.789885 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:17.789893 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:17.789905 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:17.789912 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:17 GMT
I0224 01:02:17.789921 21922 round_trippers.go:580] Audit-Id: 8b6bcfe1-d72c-4886-bb04-bb9b256b5aef
I0224 01:02:17.791583 21922 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"507"},"items":[{"metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"412","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67422 chars]
I0224 01:02:17.793521 21922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace to be "Ready" ...
I0224 01:02:17.793582 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
I0224 01:02:17.793593 21922 round_trippers.go:469] Request Headers:
I0224 01:02:17.793600 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:17.793609 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:17.799554 21922 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0224 01:02:17.799572 21922 round_trippers.go:577] Response Headers:
I0224 01:02:17.799579 21922 round_trippers.go:580] Audit-Id: 46d3468f-9227-4b5a-a90c-cd831810d0db
I0224 01:02:17.799585 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:17.799592 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:17.799601 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:17.799618 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:17.799627 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:17 GMT
I0224 01:02:17.799797 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"412","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
I0224 01:02:17.800303 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:02:17.800319 21922 round_trippers.go:469] Request Headers:
I0224 01:02:17.800329 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:17.800344 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:17.802385 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:17.802401 21922 round_trippers.go:577] Response Headers:
I0224 01:02:17.802411 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:17.802419 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:17.802427 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:17.802440 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:17.802450 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:17 GMT
I0224 01:02:17.802462 21922 round_trippers.go:580] Audit-Id: b6dae04e-a272-405b-b624-fe25000cc924
I0224 01:02:17.802717 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
I0224 01:02:17.803072 21922 pod_ready.go:92] pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace has status "Ready":"True"
I0224 01:02:17.803085 21922 pod_ready.go:81] duration metric: took 9.54594ms waiting for pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace to be "Ready" ...
I0224 01:02:17.803095 21922 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:02:17.803146 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-858631
I0224 01:02:17.803156 21922 round_trippers.go:469] Request Headers:
I0224 01:02:17.803163 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:17.803172 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:17.805776 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:17.805793 21922 round_trippers.go:577] Response Headers:
I0224 01:02:17.805803 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:17.805810 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:17.805827 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:17.805840 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:17.805854 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:17 GMT
I0224 01:02:17.805869 21922 round_trippers.go:580] Audit-Id: f8cf8076-d6d3-4ab4-832e-1152316006db
I0224 01:02:17.806008 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-858631","namespace":"kube-system","uid":"7b4b146b-12c8-4b3f-a682-8ab64a9135cb","resourceVersion":"276","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"dc4f8bffc9d97af45e685dda88cd2a94","kubernetes.io/config.mirror":"dc4f8bffc9d97af45e685dda88cd2a94","kubernetes.io/config.seen":"2023-02-24T01:00:59.730785607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5856 chars]
I0224 01:02:17.806426 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:02:17.806440 21922 round_trippers.go:469] Request Headers:
I0224 01:02:17.806451 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:17.806464 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:17.807966 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:02:17.807981 21922 round_trippers.go:577] Response Headers:
I0224 01:02:17.807991 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:17.808007 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:17.808020 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:17.808029 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:17.808042 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:17 GMT
I0224 01:02:17.808055 21922 round_trippers.go:580] Audit-Id: 458c5759-0a1f-4fa0-b545-c6e2bb2aafdb
I0224 01:02:17.808215 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
I0224 01:02:17.808460 21922 pod_ready.go:92] pod "etcd-multinode-858631" in "kube-system" namespace has status "Ready":"True"
I0224 01:02:17.808472 21922 pod_ready.go:81] duration metric: took 5.368739ms waiting for pod "etcd-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:02:17.808491 21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:02:17.808541 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-858631
I0224 01:02:17.808551 21922 round_trippers.go:469] Request Headers:
I0224 01:02:17.808562 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:17.808576 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:17.811060 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:17.811079 21922 round_trippers.go:577] Response Headers:
I0224 01:02:17.811088 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:17.811097 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:17.811110 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:17.811120 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:17 GMT
I0224 01:02:17.811133 21922 round_trippers.go:580] Audit-Id: 4c81b437-0719-49a3-8ac1-9d49b1ee705b
I0224 01:02:17.811144 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:17.811278 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-858631","namespace":"kube-system","uid":"ad778dac-86be-4c5e-8b3f-2afb354e374a","resourceVersion":"299","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.217:8443","kubernetes.io/config.hash":"2a1bcd287381cc62f4271365e9d57dba","kubernetes.io/config.mirror":"2a1bcd287381cc62f4271365e9d57dba","kubernetes.io/config.seen":"2023-02-24T01:00:59.730814539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7392 chars]
I0224 01:02:17.811585 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:02:17.811597 21922 round_trippers.go:469] Request Headers:
I0224 01:02:17.811607 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:17.811619 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:17.813796 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:17.813814 21922 round_trippers.go:577] Response Headers:
I0224 01:02:17.813823 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:17.813832 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:17.813841 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:17.813853 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:17 GMT
I0224 01:02:17.813865 21922 round_trippers.go:580] Audit-Id: cc6448e0-1cbd-452b-b1a3-54709f329fa1
I0224 01:02:17.813884 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:17.813984 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
I0224 01:02:17.814228 21922 pod_ready.go:92] pod "kube-apiserver-multinode-858631" in "kube-system" namespace has status "Ready":"True"
I0224 01:02:17.814239 21922 pod_ready.go:81] duration metric: took 5.738671ms waiting for pod "kube-apiserver-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:02:17.814249 21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:02:17.814286 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-858631
I0224 01:02:17.814295 21922 round_trippers.go:469] Request Headers:
I0224 01:02:17.814305 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:17.814316 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:17.815755 21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0224 01:02:17.815772 21922 round_trippers.go:577] Response Headers:
I0224 01:02:17.815781 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:17.815790 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:17.815805 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:17.815814 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:17.815825 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:17 GMT
I0224 01:02:17.815833 21922 round_trippers.go:580] Audit-Id: fa46faeb-e840-4d29-93f1-01d3abcac42b
I0224 01:02:17.815953 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-858631","namespace":"kube-system","uid":"c1e4ec9e-a1e9-4f43-8b1b-95c797d33242","resourceVersion":"272","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb3b8d57c02f5e81e5a272ffb5f3fbe3","kubernetes.io/config.mirror":"cb3b8d57c02f5e81e5a272ffb5f3fbe3","kubernetes.io/config.seen":"2023-02-24T01:00:59.730815908Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6957 chars]
I0224 01:02:17.816252 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:02:17.816264 21922 round_trippers.go:469] Request Headers:
I0224 01:02:17.816275 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:17.816285 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:17.818448 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:17.818466 21922 round_trippers.go:577] Response Headers:
I0224 01:02:17.818476 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:17 GMT
I0224 01:02:17.818492 21922 round_trippers.go:580] Audit-Id: b6ab3b64-242f-4cbe-b61a-4c2c449d202b
I0224 01:02:17.818504 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:17.818512 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:17.818524 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:17.818536 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:17.818635 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
I0224 01:02:17.818868 21922 pod_ready.go:92] pod "kube-controller-manager-multinode-858631" in "kube-system" namespace has status "Ready":"True"
I0224 01:02:17.818878 21922 pod_ready.go:81] duration metric: took 4.622614ms waiting for pod "kube-controller-manager-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:02:17.818889 21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vlrn6" in "kube-system" namespace to be "Ready" ...
I0224 01:02:17.982775 21922 request.go:622] Waited for 163.835849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vlrn6
I0224 01:02:17.982833 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vlrn6
I0224 01:02:17.982838 21922 round_trippers.go:469] Request Headers:
I0224 01:02:17.982846 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:17.982852 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:17.986506 21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0224 01:02:17.986524 21922 round_trippers.go:577] Response Headers:
I0224 01:02:17.986531 21922 round_trippers.go:580] Audit-Id: 1136dba7-3b62-4a56-aa8a-9ab2da34bd7b
I0224 01:02:17.986537 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:17.986543 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:17.986550 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:17.986561 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:17.986571 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:17 GMT
I0224 01:02:17.986698 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vlrn6","generateName":"kube-proxy-","namespace":"kube-system","uid":"ed1ab279-4267-4c3c-a68d-a729dc29f05b","resourceVersion":"367","creationTimestamp":"2023-02-24T01:01:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4ec6a9ff-44a2-44e8-9e3b-270212238f31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ec6a9ff-44a2-44e8-9e3b-270212238f31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
I0224 01:02:18.182818 21922 request.go:622] Waited for 195.711259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:02:18.182864 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:02:18.182869 21922 round_trippers.go:469] Request Headers:
I0224 01:02:18.182877 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:18.182883 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:18.185313 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:18.185332 21922 round_trippers.go:577] Response Headers:
I0224 01:02:18.185339 21922 round_trippers.go:580] Audit-Id: ef86a469-d957-4cc8-893b-98dbce25c375
I0224 01:02:18.185345 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:18.185351 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:18.185356 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:18.185362 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:18.185367 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:18 GMT
I0224 01:02:18.185572 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
I0224 01:02:18.185972 21922 pod_ready.go:92] pod "kube-proxy-vlrn6" in "kube-system" namespace has status "Ready":"True"
I0224 01:02:18.185987 21922 pod_ready.go:81] duration metric: took 367.092131ms waiting for pod "kube-proxy-vlrn6" in "kube-system" namespace to be "Ready" ...
I0224 01:02:18.185999 21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wrvgw" in "kube-system" namespace to be "Ready" ...
I0224 01:02:18.382990 21922 request.go:622] Waited for 196.924858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgw
I0224 01:02:18.383063 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgw
I0224 01:02:18.383069 21922 round_trippers.go:469] Request Headers:
I0224 01:02:18.383080 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:18.383102 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:18.385775 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:18.385798 21922 round_trippers.go:577] Response Headers:
I0224 01:02:18.385808 21922 round_trippers.go:580] Audit-Id: f8e8bc7d-14a9-433a-ad47-8581e9ac35be
I0224 01:02:18.385829 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:18.385838 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:18.385850 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:18.385863 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:18.385879 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:18 GMT
I0224 01:02:18.386590 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wrvgw","generateName":"kube-proxy-","namespace":"kube-system","uid":"1b634754-3905-4781-b367-af19b8dd4e3d","resourceVersion":"491","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4ec6a9ff-44a2-44e8-9e3b-270212238f31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ec6a9ff-44a2-44e8-9e3b-270212238f31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
I0224 01:02:18.582481 21922 request.go:622] Waited for 195.368039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:18.582533 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
I0224 01:02:18.582538 21922 round_trippers.go:469] Request Headers:
I0224 01:02:18.582556 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:18.582565 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:18.584860 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:18.584878 21922 round_trippers.go:577] Response Headers:
I0224 01:02:18.584884 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:18.584890 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:18.584898 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:18.584907 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:18 GMT
I0224 01:02:18.584916 21922 round_trippers.go:580] Audit-Id: 34bd7715-f9f7-43af-b872-b6cb187fbd72
I0224 01:02:18.584925 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:18.585047 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"507","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4131 chars]
I0224 01:02:18.585411 21922 pod_ready.go:92] pod "kube-proxy-wrvgw" in "kube-system" namespace has status "Ready":"True"
I0224 01:02:18.585428 21922 pod_ready.go:81] duration metric: took 399.421408ms waiting for pod "kube-proxy-wrvgw" in "kube-system" namespace to be "Ready" ...
I0224 01:02:18.585440 21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:02:18.782907 21922 request.go:622] Waited for 197.411303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-858631
I0224 01:02:18.782960 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-858631
I0224 01:02:18.782964 21922 round_trippers.go:469] Request Headers:
I0224 01:02:18.782979 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:18.782988 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:18.786814 21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0224 01:02:18.786841 21922 round_trippers.go:577] Response Headers:
I0224 01:02:18.786853 21922 round_trippers.go:580] Audit-Id: 8c52f398-49c4-4043-8b97-f9250c82333f
I0224 01:02:18.786862 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:18.786870 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:18.786879 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:18.786887 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:18.786896 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:18 GMT
I0224 01:02:18.787246 21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-858631","namespace":"kube-system","uid":"fcadaacc-9d90-4113-9bf9-b77ccbc47586","resourceVersion":"294","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a679af228396ab9ab09a15d1ab16cad8","kubernetes.io/config.mirror":"a679af228396ab9ab09a15d1ab16cad8","kubernetes.io/config.seen":"2023-02-24T01:00:59.730816890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4687 chars]
I0224 01:02:18.983025 21922 request.go:622] Waited for 195.394792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:02:18.983083 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
I0224 01:02:18.983089 21922 round_trippers.go:469] Request Headers:
I0224 01:02:18.983099 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:18.983108 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:18.985643 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:18.985671 21922 round_trippers.go:577] Response Headers:
I0224 01:02:18.985682 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:18.985691 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:18.985703 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:18 GMT
I0224 01:02:18.985716 21922 round_trippers.go:580] Audit-Id: 0b617519-9b21-410f-8e96-32f6b761a6a0
I0224 01:02:18.985728 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:18.985745 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:18.985936 21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
I0224 01:02:18.986380 21922 pod_ready.go:92] pod "kube-scheduler-multinode-858631" in "kube-system" namespace has status "Ready":"True"
I0224 01:02:18.986402 21922 pod_ready.go:81] duration metric: took 400.954409ms waiting for pod "kube-scheduler-multinode-858631" in "kube-system" namespace to be "Ready" ...
I0224 01:02:18.986418 21922 pod_ready.go:38] duration metric: took 1.200674232s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0224 01:02:18.986441 21922 system_svc.go:44] waiting for kubelet service to be running ....
I0224 01:02:18.986490 21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0224 01:02:19.000491 21922 system_svc.go:56] duration metric: took 14.043504ms WaitForService to wait for kubelet.
I0224 01:02:19.000516 21922 kubeadm.go:578] duration metric: took 13.239381651s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0224 01:02:19.000542 21922 node_conditions.go:102] verifying NodePressure condition ...
I0224 01:02:19.182971 21922 request.go:622] Waited for 182.359111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
I0224 01:02:19.183041 21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
I0224 01:02:19.183051 21922 round_trippers.go:469] Request Headers:
I0224 01:02:19.183065 21922 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0224 01:02:19.183078 21922 round_trippers.go:473] Accept: application/json, */*
I0224 01:02:19.185880 21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0224 01:02:19.185904 21922 round_trippers.go:577] Response Headers:
I0224 01:02:19.185912 21922 round_trippers.go:580] Content-Type: application/json
I0224 01:02:19.185918 21922 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
I0224 01:02:19.185923 21922 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
I0224 01:02:19.185929 21922 round_trippers.go:580] Date: Fri, 24 Feb 2023 01:02:19 GMT
I0224 01:02:19.185935 21922 round_trippers.go:580] Audit-Id: d3a7e02b-d364-4066-af5d-43f8fc3d19a1
I0224 01:02:19.185948 21922 round_trippers.go:580] Cache-Control: no-cache, private
I0224 01:02:19.186293 21922 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10291 chars]
I0224 01:02:19.186775 21922 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0224 01:02:19.186792 21922 node_conditions.go:123] node cpu capacity is 2
I0224 01:02:19.186801 21922 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0224 01:02:19.186807 21922 node_conditions.go:123] node cpu capacity is 2
I0224 01:02:19.186823 21922 node_conditions.go:105] duration metric: took 186.275755ms to run NodePressure ...
I0224 01:02:19.186836 21922 start.go:228] waiting for startup goroutines ...
I0224 01:02:19.186861 21922 start.go:242] writing updated cluster config ...
I0224 01:02:19.187143 21922 ssh_runner.go:195] Run: rm -f paused
I0224 01:02:19.236616 21922 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
I0224 01:02:19.239167 21922 out.go:177] * Done! kubectl is now configured to use "multinode-858631" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Journal begins at Fri 2023-02-24 01:00:19 UTC, ends at Fri 2023-02-24 01:03:51 UTC. --
Feb 24 01:01:19 multinode-858631 dockerd[971]: time="2023-02-24T01:01:19.151874365Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ebafedff9cf6c1fa9d9f3ff5d68acb3bb78ff26a73d0566344ce0badc8c3958e pid=4737 runtime=io.containerd.runc.v2
Feb 24 01:01:23 multinode-858631 dockerd[971]: time="2023-02-24T01:01:23.563170227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 24 01:01:23 multinode-858631 dockerd[971]: time="2023-02-24T01:01:23.563665339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 24 01:01:23 multinode-858631 dockerd[971]: time="2023-02-24T01:01:23.563732396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 24 01:01:23 multinode-858631 dockerd[971]: time="2023-02-24T01:01:23.564250584Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/69850c46e8835d3ecea60364f44420c7bc5fc0d2fb9cce1e77292520a6704954 pid=5334 runtime=io.containerd.runc.v2
Feb 24 01:01:24 multinode-858631 dockerd[971]: time="2023-02-24T01:01:24.029971166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 24 01:01:24 multinode-858631 dockerd[971]: time="2023-02-24T01:01:24.030013919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 24 01:01:24 multinode-858631 dockerd[971]: time="2023-02-24T01:01:24.030023180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 24 01:01:24 multinode-858631 dockerd[971]: time="2023-02-24T01:01:24.030470862Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/789f0dcee13fe65f35246d61a8e20169e3f809bdf16534c169f1d396a8c87a45 pid=5374 runtime=io.containerd.runc.v2
Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.038254723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.038337994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.038349351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.041426903Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/16cccc3389964c69e4d80432441960c0308efe13b6105bc10ffbb73ab2cb03ef pid=5421 runtime=io.containerd.runc.v2
Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.621713957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.621950548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.621962934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.622519588Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/891f0c08a14f74d64b797ffdbd7bacd02c74123a5e93fedf501d88b5168855f3 pid=5509 runtime=io.containerd.runc.v2
Feb 24 01:02:20 multinode-858631 dockerd[971]: time="2023-02-24T01:02:20.387973381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 24 01:02:20 multinode-858631 dockerd[971]: time="2023-02-24T01:02:20.388493762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 24 01:02:20 multinode-858631 dockerd[971]: time="2023-02-24T01:02:20.388653008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 24 01:02:20 multinode-858631 dockerd[971]: time="2023-02-24T01:02:20.389029743Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/18d4897e34fa98abde0b25295106273c2f2ae532a4b0b82c55369cb706b48759 pid=6155 runtime=io.containerd.runc.v2
Feb 24 01:02:22 multinode-858631 dockerd[971]: time="2023-02-24T01:02:22.045153538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 24 01:02:22 multinode-858631 dockerd[971]: time="2023-02-24T01:02:22.045368227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 24 01:02:22 multinode-858631 dockerd[971]: time="2023-02-24T01:02:22.045386353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 24 01:02:22 multinode-858631 dockerd[971]: time="2023-02-24T01:02:22.045692276Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/56e8dbc10ef7074cba39c2add7f494709d957afb08b98cbeae360fce25491229 pid=6258 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
56e8dbc10ef70 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 About a minute ago Running busybox 0 18d4897e34fa9
891f0c08a14f7 5185b96f0becf 2 minutes ago Running coredns 0 16cccc3389964
789f0dcee13fe 6e38f40d628db 2 minutes ago Running storage-provisioner 0 69850c46e8835
ebafedff9cf6c kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe 2 minutes ago Running kindnet-cni 0 34babfb41e50e
d7b643bcdb886 46a6bb3c77ce0 2 minutes ago Running kube-proxy 0 33bc4ce6f85a9
520690a3de270 fce326961ae2d 2 minutes ago Running etcd 0 11e92749f86d2
eaf66574e9cb6 655493523f607 2 minutes ago Running kube-scheduler 0 9606ff8ef5f99
fe09023de51d1 deb04688c4a35 2 minutes ago Running kube-apiserver 0 43df63791385e
1236d9f622921 e9c08e11b07f6 3 minutes ago Running kube-controller-manager 0 f72fb4f6682a7
*
* ==> coredns [891f0c08a14f] <==
* [INFO] 10.244.1.2:57404 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179108s
[INFO] 10.244.0.3:56962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204166s
[INFO] 10.244.0.3:32914 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001956131s
[INFO] 10.244.0.3:60276 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177058s
[INFO] 10.244.0.3:37837 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092369s
[INFO] 10.244.0.3:50061 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001304911s
[INFO] 10.244.0.3:37475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162047s
[INFO] 10.244.0.3:45734 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103713s
[INFO] 10.244.0.3:52349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147778s
[INFO] 10.244.1.2:57924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190014s
[INFO] 10.244.1.2:34536 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016461s
[INFO] 10.244.1.2:44915 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093146s
[INFO] 10.244.1.2:43414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159306s
[INFO] 10.244.0.3:41918 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096715s
[INFO] 10.244.0.3:37576 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162934s
[INFO] 10.244.0.3:45202 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051871s
[INFO] 10.244.0.3:35260 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085738s
[INFO] 10.244.1.2:55209 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139804s
[INFO] 10.244.1.2:35152 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000293214s
[INFO] 10.244.1.2:37368 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196166s
[INFO] 10.244.1.2:38431 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000278777s
[INFO] 10.244.0.3:42198 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151079s
[INFO] 10.244.0.3:36646 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000063033s
[INFO] 10.244.0.3:41803 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008872s
[INFO] 10.244.0.3:47687 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076956s
*
* ==> describe nodes <==
* Name: multinode-858631
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-858631
kubernetes.io/os=linux
minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510
minikube.k8s.io/name=multinode-858631
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_02_24T01_01_00_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 24 Feb 2023 01:00:56 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-858631
AcquireTime: <unset>
RenewTime: Fri, 24 Feb 2023 01:03:43 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 24 Feb 2023 01:02:31 +0000 Fri, 24 Feb 2023 01:00:54 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 24 Feb 2023 01:02:31 +0000 Fri, 24 Feb 2023 01:00:54 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 24 Feb 2023 01:02:31 +0000 Fri, 24 Feb 2023 01:00:54 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 24 Feb 2023 01:02:31 +0000 Fri, 24 Feb 2023 01:01:23 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.217
Hostname: multinode-858631
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 8133424a1c7c4d738ea9a601f818107b
System UUID: 8133424a-1c7c-4d73-8ea9-a601f818107b
Boot ID: 705d2688-47a8-48c7-bcc6-909f1595be50
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-6b86dd6d48-pmnbg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 92s
kube-system coredns-787d4945fb-xhwx9 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 2m39s
kube-system etcd-multinode-858631 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 2m51s
kube-system kindnet-cdxbx 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 2m40s
kube-system kube-apiserver-multinode-858631 250m (12%) 0 (0%) 0 (0%) 0 (0%) 2m51s
kube-system kube-controller-manager-multinode-858631 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m51s
kube-system kube-proxy-vlrn6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m40s
kube-system kube-scheduler-multinode-858631 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m51s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m38s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (10%) 220Mi (10%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m38s kube-proxy
Normal NodeHasSufficientMemory 3m3s (x5 over 3m3s) kubelet Node multinode-858631 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m3s (x5 over 3m3s) kubelet Node multinode-858631 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m3s (x5 over 3m3s) kubelet Node multinode-858631 status is now: NodeHasSufficientPID
Normal Starting 2m52s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m52s kubelet Node multinode-858631 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m52s kubelet Node multinode-858631 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m52s kubelet Node multinode-858631 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m52s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 2m40s node-controller Node multinode-858631 event: Registered Node multinode-858631 in Controller
Normal NodeReady 2m28s kubelet Node multinode-858631 status is now: NodeReady
Name: multinode-858631-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-858631-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 24 Feb 2023 01:02:04 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-858631-m02
AcquireTime: <unset>
RenewTime: Fri, 24 Feb 2023 01:03:46 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 24 Feb 2023 01:02:34 +0000 Fri, 24 Feb 2023 01:02:04 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 24 Feb 2023 01:02:34 +0000 Fri, 24 Feb 2023 01:02:04 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 24 Feb 2023 01:02:34 +0000 Fri, 24 Feb 2023 01:02:04 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 24 Feb 2023 01:02:34 +0000 Fri, 24 Feb 2023 01:02:17 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.3
Hostname: multinode-858631-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: c880124de1874ee0a51f88b5a4f1ece3
System UUID: c880124d-e187-4ee0-a51f-88b5a4f1ece3
Boot ID: 45d06bec-2c61-42ea-bc55-ed9d3f47ea39
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-6b86dd6d48-bkl2m 0 (0%) 0 (0%) 0 (0%) 0 (0%) 93s
kube-system kindnet-hhfkf 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 108s
kube-system kube-proxy-wrvgw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 108s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 104s kube-proxy
Normal Starting 109s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 108s (x2 over 108s) kubelet Node multinode-858631-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 108s (x2 over 108s) kubelet Node multinode-858631-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 108s (x2 over 108s) kubelet Node multinode-858631-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 108s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 106s node-controller Node multinode-858631-m02 event: Registered Node multinode-858631-m02 in Controller
Normal NodeReady 95s kubelet Node multinode-858631-m02 status is now: NodeReady
Name: multinode-858631-m03
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-858631-m03
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 24 Feb 2023 01:03:04 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-858631-m03
AcquireTime: <unset>
RenewTime: Fri, 24 Feb 2023 01:03:24 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 24 Feb 2023 01:03:18 +0000 Fri, 24 Feb 2023 01:03:04 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 24 Feb 2023 01:03:18 +0000 Fri, 24 Feb 2023 01:03:04 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 24 Feb 2023 01:03:18 +0000 Fri, 24 Feb 2023 01:03:04 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 24 Feb 2023 01:03:18 +0000 Fri, 24 Feb 2023 01:03:18 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.240
Hostname: multinode-858631-m03
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 5c2f15898560409c9b865744cb819409
System UUID: 5c2f1589-8560-409c-9b86-5744cb819409
Boot ID: d01f101c-bcd6-4283-bfb8-f8dddf12cd04
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.2.0/24
PodCIDRs: 10.244.2.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kindnet-c942r 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 48s
kube-system kube-proxy-9rnd6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 48s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 44s kube-proxy
Normal Starting 48s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 48s (x2 over 48s) kubelet Node multinode-858631-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 48s (x2 over 48s) kubelet Node multinode-858631-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 48s (x2 over 48s) kubelet Node multinode-858631-m03 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 48s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 46s node-controller Node multinode-858631-m03 event: Registered Node multinode-858631-m03 in Controller
Normal NodeReady 34s kubelet Node multinode-858631-m03 status is now: NodeReady
*
* ==> dmesg <==
* [ +0.070223] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.956879] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.182341] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.147123] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +5.038236] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +7.466179] systemd-fstab-generator[546]: Ignoring "noauto" for root device
[ +0.103218] systemd-fstab-generator[557]: Ignoring "noauto" for root device
[ +5.331047] systemd-fstab-generator[735]: Ignoring "noauto" for root device
[ +3.220071] kauditd_printk_skb: 14 callbacks suppressed
[ +0.331990] systemd-fstab-generator[898]: Ignoring "noauto" for root device
[ +0.243658] systemd-fstab-generator[932]: Ignoring "noauto" for root device
[ +0.109723] systemd-fstab-generator[943]: Ignoring "noauto" for root device
[ +0.110176] systemd-fstab-generator[956]: Ignoring "noauto" for root device
[ +1.445023] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
[ +0.105479] systemd-fstab-generator[1116]: Ignoring "noauto" for root device
[ +0.099493] systemd-fstab-generator[1127]: Ignoring "noauto" for root device
[ +0.109382] systemd-fstab-generator[1138]: Ignoring "noauto" for root device
[ +4.843925] systemd-fstab-generator[1387]: Ignoring "noauto" for root device
[ +0.532969] kauditd_printk_skb: 68 callbacks suppressed
[ +11.250819] systemd-fstab-generator[2141]: Ignoring "noauto" for root device
[Feb24 01:01] kauditd_printk_skb: 8 callbacks suppressed
[ +6.570692] kauditd_printk_skb: 12 callbacks suppressed
*
* ==> etcd [520690a3de27] <==
* {"level":"warn","ts":"2023-02-24T01:01:58.699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:01:58.310Z","time spent":"389.612126ms","remote":"127.0.0.1:43764","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.217\" mod_revision:435 > success:<request_put:<key:\"/registry/masterleases/192.168.39.217\" value_size:67 lease:8213869183733117084 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.217\" > >"}
{"level":"warn","ts":"2023-02-24T01:01:58.700Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"357.645808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
{"level":"info","ts":"2023-02-24T01:01:58.700Z","caller":"traceutil/trace.go:171","msg":"trace[1246393292] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:443; }","duration":"357.819676ms","start":"2023-02-24T01:01:58.342Z","end":"2023-02-24T01:01:58.700Z","steps":["trace[1246393292] 'agreement among raft nodes before linearized reading' (duration: 357.422478ms)"],"step_count":1}
{"level":"warn","ts":"2023-02-24T01:01:58.700Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:01:58.342Z","time spent":"357.901917ms","remote":"127.0.0.1:43786","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1140,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
{"level":"info","ts":"2023-02-24T01:01:58.700Z","caller":"traceutil/trace.go:171","msg":"trace[609908551] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:443; }","duration":"238.555602ms","start":"2023-02-24T01:01:58.461Z","end":"2023-02-24T01:01:58.700Z","steps":["trace[609908551] 'agreement among raft nodes before linearized reading' (duration: 238.221795ms)"],"step_count":1}
{"level":"warn","ts":"2023-02-24T01:01:59.245Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"412.806902ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17437241220587892898 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:441 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
{"level":"info","ts":"2023-02-24T01:01:59.245Z","caller":"traceutil/trace.go:171","msg":"trace[1863625836] linearizableReadLoop","detail":"{readStateIndex:468; appliedIndex:467; }","duration":"540.070693ms","start":"2023-02-24T01:01:58.705Z","end":"2023-02-24T01:01:59.245Z","steps":["trace[1863625836] 'read index received' (duration: 127.125993ms)","trace[1863625836] 'applied index is now lower than readState.Index' (duration: 412.944139ms)"],"step_count":2}
{"level":"info","ts":"2023-02-24T01:01:59.246Z","caller":"traceutil/trace.go:171","msg":"trace[617464482] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"540.459827ms","start":"2023-02-24T01:01:58.705Z","end":"2023-02-24T01:01:59.246Z","steps":["trace[617464482] 'process raft request' (duration: 127.325664ms)","trace[617464482] 'compare' (duration: 412.606398ms)"],"step_count":2}
{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:01:58.705Z","time spent":"540.605719ms","remote":"127.0.0.1:43786","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:441 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"508.205128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2023-02-24T01:01:59.246Z","caller":"traceutil/trace.go:171","msg":"trace[1904230364] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:444; }","duration":"508.25585ms","start":"2023-02-24T01:01:58.738Z","end":"2023-02-24T01:01:59.246Z","steps":["trace[1904230364] 'agreement among raft nodes before linearized reading' (duration: 508.176727ms)"],"step_count":1}
{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:01:58.738Z","time spent":"508.289738ms","remote":"127.0.0.1:43774","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":31,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true "}
{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"383.744677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2023-02-24T01:01:59.246Z","caller":"traceutil/trace.go:171","msg":"trace[498940617] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:444; }","duration":"383.762281ms","start":"2023-02-24T01:01:58.862Z","end":"2023-02-24T01:01:59.246Z","steps":["trace[498940617] 'agreement among raft nodes before linearized reading' (duration: 383.730465ms)"],"step_count":1}
{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:01:58.862Z","time spent":"383.797863ms","remote":"127.0.0.1:43846","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"540.837991ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:135"}
{"level":"info","ts":"2023-02-24T01:01:59.246Z","caller":"traceutil/trace.go:171","msg":"trace[477372589] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:444; }","duration":"540.855928ms","start":"2023-02-24T01:01:58.705Z","end":"2023-02-24T01:01:59.246Z","steps":["trace[477372589] 'agreement among raft nodes before linearized reading' (duration: 540.785906ms)"],"step_count":1}
{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:01:58.705Z","time spent":"540.888721ms","remote":"127.0.0.1:43764","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":159,"request content":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" "}
{"level":"info","ts":"2023-02-24T01:02:57.662Z","caller":"traceutil/trace.go:171","msg":"trace[1926505598] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"106.410423ms","start":"2023-02-24T01:02:57.556Z","end":"2023-02-24T01:02:57.662Z","steps":["trace[1926505598] 'process raft request' (duration: 106.240224ms)"],"step_count":1}
{"level":"warn","ts":"2023-02-24T01:02:58.262Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"137.76133ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17437241220587893391 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.217\" mod_revision:571 > success:<request_put:<key:\"/registry/masterleases/192.168.39.217\" value_size:67 lease:8213869183733117581 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.217\" > >>","response":"size:16"}
{"level":"info","ts":"2023-02-24T01:02:58.262Z","caller":"traceutil/trace.go:171","msg":"trace[900094601] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"272.052506ms","start":"2023-02-24T01:02:57.990Z","end":"2023-02-24T01:02:58.262Z","steps":["trace[900094601] 'process raft request' (duration: 133.948282ms)","trace[900094601] 'compare' (duration: 137.467001ms)"],"step_count":2}
{"level":"warn","ts":"2023-02-24T01:02:58.682Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"127.377973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2023-02-24T01:02:58.683Z","caller":"traceutil/trace.go:171","msg":"trace[1870110251] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:580; }","duration":"127.517896ms","start":"2023-02-24T01:02:58.555Z","end":"2023-02-24T01:02:58.682Z","steps":["trace[1870110251] 'count revisions from in-memory index tree' (duration: 127.125493ms)"],"step_count":1}
{"level":"warn","ts":"2023-02-24T01:02:59.960Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"133.041703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2023-02-24T01:02:59.960Z","caller":"traceutil/trace.go:171","msg":"trace[1707009120] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:582; }","duration":"133.100368ms","start":"2023-02-24T01:02:59.826Z","end":"2023-02-24T01:02:59.960Z","steps":["trace[1707009120] 'count revisions from in-memory index tree' (duration: 132.8787ms)"],"step_count":1}
*
* ==> kernel <==
* 01:03:52 up 3 min, 0 users, load average: 0.79, 0.34, 0.13
Linux multinode-858631 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kindnet [ebafedff9cf6] <==
* I0224 01:03:09.999545 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.240 Flags: [] Table: 0}
I0224 01:03:20.015146 1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
I0224 01:03:20.015263 1 main.go:227] handling current node
I0224 01:03:20.015279 1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
I0224 01:03:20.015285 1 main.go:250] Node multinode-858631-m02 has CIDR [10.244.1.0/24]
I0224 01:03:20.016402 1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
I0224 01:03:20.016517 1 main.go:250] Node multinode-858631-m03 has CIDR [10.244.2.0/24]
I0224 01:03:30.022477 1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
I0224 01:03:30.022503 1 main.go:227] handling current node
I0224 01:03:30.022518 1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
I0224 01:03:30.022523 1 main.go:250] Node multinode-858631-m02 has CIDR [10.244.1.0/24]
I0224 01:03:30.022700 1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
I0224 01:03:30.022707 1 main.go:250] Node multinode-858631-m03 has CIDR [10.244.2.0/24]
I0224 01:03:40.036081 1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
I0224 01:03:40.036101 1 main.go:227] handling current node
I0224 01:03:40.036124 1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
I0224 01:03:40.036131 1 main.go:250] Node multinode-858631-m02 has CIDR [10.244.1.0/24]
I0224 01:03:40.036365 1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
I0224 01:03:40.036374 1 main.go:250] Node multinode-858631-m03 has CIDR [10.244.2.0/24]
I0224 01:03:50.046065 1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
I0224 01:03:50.046135 1 main.go:227] handling current node
I0224 01:03:50.046159 1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
I0224 01:03:50.046169 1 main.go:250] Node multinode-858631-m02 has CIDR [10.244.1.0/24]
I0224 01:03:50.046417 1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
I0224 01:03:50.046459 1 main.go:250] Node multinode-858631-m03 has CIDR [10.244.2.0/24]
*
* ==> kube-apiserver [fe09023de51d] <==
* I0224 01:00:57.118105 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0224 01:00:57.122640 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0224 01:00:57.122653 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0224 01:00:57.697510 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0224 01:00:57.737565 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0224 01:00:57.884803 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0224 01:00:57.894325 1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
I0224 01:00:57.895479 1 controller.go:615] quota admission added evaluator for: endpoints
I0224 01:00:57.900008 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0224 01:00:58.173957 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0224 01:00:59.585983 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0224 01:00:59.602264 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0224 01:00:59.617070 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0224 01:01:11.580462 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0224 01:01:11.879154 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
I0224 01:01:58.701378 1 trace.go:219] Trace[368084194]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.217,type:*v1.Endpoints,resource:apiServerIPInfo (24-Feb-2023 01:01:58.181) (total time: 519ms):
Trace[368084194]: ---"Transaction prepared" 127ms (01:01:58.309)
Trace[368084194]: ---"Txn call completed" 391ms (01:01:58.701)
Trace[368084194]: [519.939338ms] [519.939338ms] END
I0224 01:01:59.247137 1 trace.go:219] Trace[990533136]: "Update" accept:application/json, */*,audit-id:7b546f7a-c601-41e1-b10e-1e85ad5601b2,client:192.168.39.217,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (24-Feb-2023 01:01:58.703) (total time: 543ms):
Trace[990533136]: ["GuaranteedUpdate etcd3" audit-id:7b546f7a-c601-41e1-b10e-1e85ad5601b2,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 542ms (01:01:58.704)
Trace[990533136]: ---"Txn call completed" 541ms (01:01:59.246)]
Trace[990533136]: [543.349416ms] [543.349416ms] END
I0224 01:01:59.248065 1 trace.go:219] Trace[97825507]: "List(recursive=true) etcd3" audit-id:,key:/masterleases/,resourceVersion:0,resourceVersionMatch:NotOlderThan,limit:0,continue: (24-Feb-2023 01:01:58.704) (total time: 543ms):
Trace[97825507]: [543.458244ms] [543.458244ms] END
*
* ==> kube-controller-manager [1236d9f62292] <==
* I0224 01:01:11.908557 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cdxbx"
I0224 01:01:12.043584 1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-pjnpp"
I0224 01:01:12.054853 1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-xhwx9"
I0224 01:01:12.264558 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
I0224 01:01:12.300154 1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-pjnpp"
I0224 01:01:26.048745 1 node_lifecycle_controller.go:1231] Controller detected that some Nodes are Ready. Exiting master disruption mode.
W0224 01:02:04.183578 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-858631-m02" does not exist
I0224 01:02:04.211788 1 range_allocator.go:372] Set node multinode-858631-m02 PodCIDR to [10.244.1.0/24]
I0224 01:02:04.219106 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wrvgw"
I0224 01:02:04.219332 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hhfkf"
W0224 01:02:06.054580 1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-858631-m02. Assuming now as a timestamp.
I0224 01:02:06.054696 1 event.go:294] "Event occurred" object="multinode-858631-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-858631-m02 event: Registered Node multinode-858631-m02 in Controller"
W0224 01:02:17.579624 1 topologycache.go:232] Can't get CPU or zone information for multinode-858631-m02 node
I0224 01:02:19.907868 1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
I0224 01:02:19.927765 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-bkl2m"
I0224 01:02:19.945949 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-pmnbg"
I0224 01:02:21.071176 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48-bkl2m" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-6b86dd6d48-bkl2m"
W0224 01:03:04.815016 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-858631-m03" does not exist
W0224 01:03:04.816913 1 topologycache.go:232] Can't get CPU or zone information for multinode-858631-m02 node
I0224 01:03:04.833299 1 range_allocator.go:372] Set node multinode-858631-m03 PodCIDR to [10.244.2.0/24]
I0224 01:03:04.843519 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-c942r"
I0224 01:03:04.843573 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9rnd6"
W0224 01:03:06.079261 1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-858631-m03. Assuming now as a timestamp.
I0224 01:03:06.079626 1 event.go:294] "Event occurred" object="multinode-858631-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-858631-m03 event: Registered Node multinode-858631-m03 in Controller"
W0224 01:03:18.075812 1 topologycache.go:232] Can't get CPU or zone information for multinode-858631-m02 node
*
* ==> kube-proxy [d7b643bcdb88] <==
* I0224 01:01:13.121900 1 node.go:163] Successfully retrieved node IP: 192.168.39.217
I0224 01:01:13.125308 1 server_others.go:109] "Detected node IP" address="192.168.39.217"
I0224 01:01:13.125365 1 server_others.go:535] "Using iptables proxy"
I0224 01:01:13.255848 1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0224 01:01:13.255890 1 server_others.go:176] "Using iptables Proxier"
I0224 01:01:13.255924 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0224 01:01:13.256178 1 server.go:655] "Version info" version="v1.26.1"
I0224 01:01:13.256281 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0224 01:01:13.259007 1 config.go:317] "Starting service config controller"
I0224 01:01:13.259018 1 shared_informer.go:273] Waiting for caches to sync for service config
I0224 01:01:13.259042 1 config.go:226] "Starting endpoint slice config controller"
I0224 01:01:13.259045 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0224 01:01:13.263080 1 config.go:444] "Starting node config controller"
I0224 01:01:13.263163 1 shared_informer.go:273] Waiting for caches to sync for node config
I0224 01:01:13.359315 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0224 01:01:13.359358 1 shared_informer.go:280] Caches are synced for service config
I0224 01:01:13.364105 1 shared_informer.go:280] Caches are synced for node config
*
* ==> kube-scheduler [eaf66574e9cb] <==
* E0224 01:00:56.249644 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0224 01:00:56.249652 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0224 01:00:56.250444 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0224 01:00:56.250457 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0224 01:00:56.250465 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0224 01:00:56.250471 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0224 01:00:56.250477 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0224 01:00:56.250484 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0224 01:00:56.253696 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0224 01:00:56.257289 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0224 01:00:57.085845 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0224 01:00:57.085901 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0224 01:00:57.096856 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0224 01:00:57.096917 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0224 01:00:57.102355 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0224 01:00:57.102374 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0224 01:00:57.130083 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0224 01:00:57.130343 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0224 01:00:57.323364 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0224 01:00:57.323679 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0224 01:00:57.326108 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0224 01:00:57.326151 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0224 01:00:57.360737 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0224 01:00:57.360785 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
I0224 01:01:00.327536 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Fri 2023-02-24 01:00:19 UTC, ends at Fri 2023-02-24 01:03:52 UTC. --
Feb 24 01:01:11 multinode-858631 kubelet[2154]: I0224 01:01:11.955809 2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skzdl\" (UniqueName: \"kubernetes.io/projected/55b36f8b-ffbe-49b3-99fc-aea074319cd0-kube-api-access-skzdl\") pod \"kindnet-cdxbx\" (UID: \"55b36f8b-ffbe-49b3-99fc-aea074319cd0\") " pod="kube-system/kindnet-cdxbx"
Feb 24 01:01:11 multinode-858631 kubelet[2154]: I0224 01:01:11.955831 2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55b36f8b-ffbe-49b3-99fc-aea074319cd0-lib-modules\") pod \"kindnet-cdxbx\" (UID: \"55b36f8b-ffbe-49b3-99fc-aea074319cd0\") " pod="kube-system/kindnet-cdxbx"
Feb 24 01:01:11 multinode-858631 kubelet[2154]: I0224 01:01:11.955852 2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed1ab279-4267-4c3c-a68d-a729dc29f05b-xtables-lock\") pod \"kube-proxy-vlrn6\" (UID: \"ed1ab279-4267-4c3c-a68d-a729dc29f05b\") " pod="kube-system/kube-proxy-vlrn6"
Feb 24 01:01:11 multinode-858631 kubelet[2154]: I0224 01:01:11.955873 2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fqb2\" (UniqueName: \"kubernetes.io/projected/ed1ab279-4267-4c3c-a68d-a729dc29f05b-kube-api-access-9fqb2\") pod \"kube-proxy-vlrn6\" (UID: \"ed1ab279-4267-4c3c-a68d-a729dc29f05b\") " pod="kube-system/kube-proxy-vlrn6"
Feb 24 01:01:11 multinode-858631 kubelet[2154]: I0224 01:01:11.955893 2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed1ab279-4267-4c3c-a68d-a729dc29f05b-kube-proxy\") pod \"kube-proxy-vlrn6\" (UID: \"ed1ab279-4267-4c3c-a68d-a729dc29f05b\") " pod="kube-system/kube-proxy-vlrn6"
Feb 24 01:01:11 multinode-858631 kubelet[2154]: I0224 01:01:11.955929 2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed1ab279-4267-4c3c-a68d-a729dc29f05b-lib-modules\") pod \"kube-proxy-vlrn6\" (UID: \"ed1ab279-4267-4c3c-a68d-a729dc29f05b\") " pod="kube-system/kube-proxy-vlrn6"
Feb 24 01:01:13 multinode-858631 kubelet[2154]: I0224 01:01:13.245619 2154 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vlrn6" podStartSLOduration=2.24558033 pod.CreationTimestamp="2023-02-24 01:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:01:13.244406408 +0000 UTC m=+13.689776843" watchObservedRunningTime="2023-02-24 01:01:13.24558033 +0000 UTC m=+13.690950758"
Feb 24 01:01:15 multinode-858631 kubelet[2154]: I0224 01:01:15.906680 2154 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34babfb41e50eb77c4bc905313e993b544b738a460a7a8d0fa527acf39b209b2"
Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.079980 2154 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.126467 2154 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-cdxbx" podStartSLOduration=-9.223372024728348e+09 pod.CreationTimestamp="2023-02-24 01:01:11 +0000 UTC" firstStartedPulling="2023-02-24 01:01:15.909493931 +0000 UTC m=+16.354864344" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:01:19.964031269 +0000 UTC m=+20.409401686" watchObservedRunningTime="2023-02-24 01:01:23.126427267 +0000 UTC m=+23.571797700"
Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.126708 2154 topology_manager.go:210] "Topology Admit Handler"
Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.129041 2154 topology_manager.go:210] "Topology Admit Handler"
Feb 24 01:01:23 multinode-858631 kubelet[2154]: W0224 01:01:23.138561 2154 reflector.go:424] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-858631" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-858631' and this object
Feb 24 01:01:23 multinode-858631 kubelet[2154]: E0224 01:01:23.138666 2154 reflector.go:140] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-858631" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-858631' and this object
Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.140608 2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkgqx\" (UniqueName: \"kubernetes.io/projected/7ec578fe-05c4-4916-8db9-67ee112c136f-kube-api-access-tkgqx\") pod \"storage-provisioner\" (UID: \"7ec578fe-05c4-4916-8db9-67ee112c136f\") " pod="kube-system/storage-provisioner"
Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.140721 2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d799d4f-0d4b-468e-85ad-052c1735e35c-config-volume\") pod \"coredns-787d4945fb-xhwx9\" (UID: \"9d799d4f-0d4b-468e-85ad-052c1735e35c\") " pod="kube-system/coredns-787d4945fb-xhwx9"
Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.140796 2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crxjh\" (UniqueName: \"kubernetes.io/projected/9d799d4f-0d4b-468e-85ad-052c1735e35c-kube-api-access-crxjh\") pod \"coredns-787d4945fb-xhwx9\" (UID: \"9d799d4f-0d4b-468e-85ad-052c1735e35c\") " pod="kube-system/coredns-787d4945fb-xhwx9"
Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.140869 2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7ec578fe-05c4-4916-8db9-67ee112c136f-tmp\") pod \"storage-provisioner\" (UID: \"7ec578fe-05c4-4916-8db9-67ee112c136f\") " pod="kube-system/storage-provisioner"
Feb 24 01:01:24 multinode-858631 kubelet[2154]: E0224 01:01:24.243849 2154 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Feb 24 01:01:24 multinode-858631 kubelet[2154]: E0224 01:01:24.243996 2154 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d799d4f-0d4b-468e-85ad-052c1735e35c-config-volume podName:9d799d4f-0d4b-468e-85ad-052c1735e35c nodeName:}" failed. No retries permitted until 2023-02-24 01:01:24.743960629 +0000 UTC m=+25.189331043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9d799d4f-0d4b-468e-85ad-052c1735e35c-config-volume") pod "coredns-787d4945fb-xhwx9" (UID: "9d799d4f-0d4b-468e-85ad-052c1735e35c") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 01:01:25 multinode-858631 kubelet[2154]: I0224 01:01:25.012411 2154 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.012379467 pod.CreationTimestamp="2023-02-24 01:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:01:25.012115165 +0000 UTC m=+25.457485598" watchObservedRunningTime="2023-02-24 01:01:25.012379467 +0000 UTC m=+25.457749899"
Feb 24 01:01:25 multinode-858631 kubelet[2154]: I0224 01:01:25.496245 2154 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16cccc3389964c69e4d80432441960c0308efe13b6105bc10ffbb73ab2cb03ef"
Feb 24 01:01:26 multinode-858631 kubelet[2154]: I0224 01:01:26.531245 2154 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-xhwx9" podStartSLOduration=14.531056449 pod.CreationTimestamp="2023-02-24 01:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:01:26.526178496 +0000 UTC m=+26.971548929" watchObservedRunningTime="2023-02-24 01:01:26.531056449 +0000 UTC m=+26.976426902"
Feb 24 01:02:19 multinode-858631 kubelet[2154]: I0224 01:02:19.978403 2154 topology_manager.go:210] "Topology Admit Handler"
Feb 24 01:02:20 multinode-858631 kubelet[2154]: I0224 01:02:20.018651 2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gc6s\" (UniqueName: \"kubernetes.io/projected/f4b83e91-f308-405b-be73-10f422c3af35-kube-api-access-5gc6s\") pod \"busybox-6b86dd6d48-pmnbg\" (UID: \"f4b83e91-f308-405b-be73-10f422c3af35\") " pod="default/busybox-6b86dd6d48-pmnbg"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-858631 -n multinode-858631
helpers_test.go:261: (dbg) Run: kubectl --context multinode-858631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (20.80s)