=== RUN TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run: out/minikube-linux-amd64 -p ha-230158 node start m02 -v=7 --alsologtostderr
E0804 00:38:01.833159 11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
ha_test.go:420: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 node start m02 -v=7 --alsologtostderr: exit status 90 (1m18.086881165s)
-- stdout --
* Starting "ha-230158-m02" control-plane node in "ha-230158" cluster
* Restarting existing kvm2 VM for "ha-230158-m02" ...
-- /stdout --
** stderr **
I0804 00:37:58.331977 25510 out.go:291] Setting OutFile to fd 1 ...
I0804 00:37:58.332120 25510 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:37:58.332130 25510 out.go:304] Setting ErrFile to fd 2...
I0804 00:37:58.332141 25510 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:37:58.332317 25510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:37:58.332555 25510 mustload.go:65] Loading cluster: ha-230158
I0804 00:37:58.332888 25510 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:37:58.333279 25510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:37:58.333322 25510 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:37:58.348801 25510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
I0804 00:37:58.349217 25510 main.go:141] libmachine: () Calling .GetVersion
I0804 00:37:58.349800 25510 main.go:141] libmachine: Using API Version 1
I0804 00:37:58.349821 25510 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:37:58.350182 25510 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:37:58.350406 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
W0804 00:37:58.352024 25510 host.go:58] "ha-230158-m02" host status: Stopped
I0804 00:37:58.354076 25510 out.go:177] * Starting "ha-230158-m02" control-plane node in "ha-230158" cluster
I0804 00:37:58.355485 25510 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0804 00:37:58.355536 25510 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
I0804 00:37:58.355553 25510 cache.go:56] Caching tarball of preloaded images
I0804 00:37:58.355653 25510 preload.go:172] Found /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0804 00:37:58.355665 25510 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0804 00:37:58.355778 25510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:37:58.355958 25510 start.go:360] acquireMachinesLock for ha-230158-m02: {Name:mk3c8b650475b5a29be5f1e49e0345d4de7c1632 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0804 00:37:58.356011 25510 start.go:364] duration metric: took 24.45µs to acquireMachinesLock for "ha-230158-m02"
I0804 00:37:58.356028 25510 start.go:96] Skipping create...Using existing machine configuration
I0804 00:37:58.356038 25510 fix.go:54] fixHost starting: m02
I0804 00:37:58.356354 25510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:37:58.356386 25510 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:37:58.371434 25510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
I0804 00:37:58.371809 25510 main.go:141] libmachine: () Calling .GetVersion
I0804 00:37:58.372299 25510 main.go:141] libmachine: Using API Version 1
I0804 00:37:58.372319 25510 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:37:58.372716 25510 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:37:58.372896 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:37:58.373043 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:37:58.374382 25510 fix.go:112] recreateIfNeeded on ha-230158-m02: state=Stopped err=<nil>
I0804 00:37:58.374406 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
W0804 00:37:58.374556 25510 fix.go:138] unexpected machine state, will restart: <nil>
I0804 00:37:58.376389 25510 out.go:177] * Restarting existing kvm2 VM for "ha-230158-m02" ...
I0804 00:37:58.377504 25510 main.go:141] libmachine: (ha-230158-m02) Calling .Start
I0804 00:37:58.377660 25510 main.go:141] libmachine: (ha-230158-m02) Ensuring networks are active...
I0804 00:37:58.378211 25510 main.go:141] libmachine: (ha-230158-m02) Ensuring network default is active
I0804 00:37:58.378552 25510 main.go:141] libmachine: (ha-230158-m02) Ensuring network mk-ha-230158 is active
I0804 00:37:58.378934 25510 main.go:141] libmachine: (ha-230158-m02) Getting domain xml...
I0804 00:37:58.379491 25510 main.go:141] libmachine: (ha-230158-m02) Creating domain...
I0804 00:37:59.624645 25510 main.go:141] libmachine: (ha-230158-m02) Waiting to get IP...
I0804 00:37:59.625595 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:37:59.626042 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has current primary IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:37:59.626081 25510 main.go:141] libmachine: (ha-230158-m02) Found IP for machine: 192.168.39.188
I0804 00:37:59.626095 25510 main.go:141] libmachine: (ha-230158-m02) Reserving static IP address...
I0804 00:37:59.626602 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "ha-230158-m02", mac: "52:54:00:18:6b:a7", ip: "192.168.39.188"} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:37:59.626628 25510 main.go:141] libmachine: (ha-230158-m02) Reserved static IP address: 192.168.39.188
I0804 00:37:59.626649 25510 main.go:141] libmachine: (ha-230158-m02) DBG | skip adding static IP to network mk-ha-230158 - found existing host DHCP lease matching {name: "ha-230158-m02", mac: "52:54:00:18:6b:a7", ip: "192.168.39.188"}
I0804 00:37:59.626667 25510 main.go:141] libmachine: (ha-230158-m02) DBG | Getting to WaitForSSH function...
I0804 00:37:59.626679 25510 main.go:141] libmachine: (ha-230158-m02) Waiting for SSH to be available...
I0804 00:37:59.628881 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:37:59.629315 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:37:59.629341 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:37:59.629473 25510 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH client type: external
I0804 00:37:59.629501 25510 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa (-rw-------)
I0804 00:37:59.629576 25510 main.go:141] libmachine: (ha-230158-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0804 00:37:59.629606 25510 main.go:141] libmachine: (ha-230158-m02) DBG | About to run SSH command:
I0804 00:37:59.629620 25510 main.go:141] libmachine: (ha-230158-m02) DBG | exit 0
I0804 00:38:10.770721 25510 main.go:141] libmachine: (ha-230158-m02) DBG | SSH cmd err, output: <nil>:
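The ~11-second gap above is libmachine's WaitForSSH loop: it repeatedly runs `ssh ... "exit 0"` against the guest until the connection succeeds. A minimal sketch of that retry shape, with the real ssh probe faked by a function that succeeds on the third try so it runs anywhere:

```shell
#!/bin/sh
# Poll until a probe succeeds, like libmachine's WaitForSSH loop,
# which runs `ssh ... "exit 0"` until the guest answers. The ssh
# call is a stand-in here: probe() fakes success on the 3rd try.
set -eu

tries=0
probe() {  # real probe would be: ssh -o ConnectTimeout=10 docker@"$IP" exit 0
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]
}

until probe; do
  # Cap the retries so a dead host fails the provision step
  # instead of hanging forever.
  [ "$tries" -lt 30 ] || { echo "timed out waiting for SSH" >&2; exit 1; }
  sleep 1
done
echo "SSH available after $tries attempts"
```

The `exit 0` payload is deliberate: it proves authentication and command execution work without depending on anything inside the guest.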
I0804 00:38:10.771116 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetConfigRaw
I0804 00:38:10.771802 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:38:10.774556 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.775061 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:10.775093 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.775352 25510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:38:10.775590 25510 machine.go:94] provisionDockerMachine start ...
I0804 00:38:10.775613 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:10.775828 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:10.778196 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.778563 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:10.778587 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.778743 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:10.778896 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:10.779103 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:10.779249 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:10.779397 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:10.779583 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:10.779595 25510 main.go:141] libmachine: About to run SSH command:
hostname
I0804 00:38:10.894581 25510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0804 00:38:10.894614 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:38:10.894814 25510 buildroot.go:166] provisioning hostname "ha-230158-m02"
I0804 00:38:10.894837 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:38:10.895004 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:10.897476 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.897844 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:10.897881 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.897983 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:10.898155 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:10.898354 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:10.898508 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:10.898677 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:10.898885 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:10.898903 25510 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-230158-m02 && echo "ha-230158-m02" | sudo tee /etc/hostname
I0804 00:38:11.026137 25510 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-230158-m02
I0804 00:38:11.026164 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.029047 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.029537 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.029569 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.029738 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.029932 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.030104 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.030262 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.030442 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:11.030614 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:11.030630 25510 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-230158-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-230158-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-230158-m02' | sudo tee -a /etc/hosts;
fi
fi
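The `/etc/hosts` edit above is an idempotent pattern: do nothing if the hostname is already mapped, rewrite an existing `127.0.1.1` line if there is one, otherwise append one. A sketch of the same grep/sed logic against a scratch file (so it runs without root; assumes GNU `sed -i`, as on the Linux host in this log):

```shell
#!/bin/sh
# Idempotently map 127.0.1.1 to a hostname, mirroring the
# grep/sed/tee sequence from the log but on a scratch file.
set -eu

HOSTS=$(mktemp)
NAME="ha-230158-m02"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # An existing 127.0.1.1 line: rewrite it in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No 127.0.1.1 line yet: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi

grep '^127\.0\.1\.1' "$HOSTS"
```

Because every branch is guarded, re-running the provisioner never duplicates the entry.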
I0804 00:38:11.156086 25510 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0804 00:38:11.156110 25510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-3947/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-3947/.minikube}
I0804 00:38:11.156138 25510 buildroot.go:174] setting up certificates
I0804 00:38:11.156146 25510 provision.go:84] configureAuth start
I0804 00:38:11.156154 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:38:11.156432 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:38:11.159121 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.159564 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.159594 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.159837 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.162124 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.162514 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.162546 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.162670 25510 provision.go:143] copyHostCerts
I0804 00:38:11.162702 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
I0804 00:38:11.162745 25510 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem, removing ...
I0804 00:38:11.162757 25510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
I0804 00:38:11.162841 25510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem (1679 bytes)
I0804 00:38:11.162933 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
I0804 00:38:11.162957 25510 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem, removing ...
I0804 00:38:11.162964 25510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
I0804 00:38:11.163001 25510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem (1082 bytes)
I0804 00:38:11.163058 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
I0804 00:38:11.163087 25510 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem, removing ...
I0804 00:38:11.163096 25510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
I0804 00:38:11.163133 25510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem (1123 bytes)
I0804 00:38:11.163210 25510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem org=jenkins.ha-230158-m02 san=[127.0.0.1 192.168.39.188 ha-230158-m02 localhost minikube]
I0804 00:38:11.457749 25510 provision.go:177] copyRemoteCerts
I0804 00:38:11.457804 25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0804 00:38:11.457831 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.460834 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.461178 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.461216 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.461413 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.461642 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.461792 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.462029 25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:11.552505 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0804 00:38:11.552571 25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0804 00:38:11.577189 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem -> /etc/docker/server.pem
I0804 00:38:11.577290 25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0804 00:38:11.602036 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0804 00:38:11.602101 25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0804 00:38:11.625880 25510 provision.go:87] duration metric: took 469.715717ms to configureAuth
I0804 00:38:11.625907 25510 buildroot.go:189] setting minikube options for container-runtime
I0804 00:38:11.626132 25510 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:38:11.626154 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:11.626421 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.629200 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.629715 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.629742 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.629913 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.630078 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.630223 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.630379 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.630558 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:11.630716 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:11.630727 25510 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0804 00:38:11.748109 25510 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0804 00:38:11.748129 25510 buildroot.go:70] root file system type: tmpfs
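The provisioner probes the root filesystem type here because a `tmpfs` root identifies the buildroot live image, which changes where the Docker unit must be written. The probe is a one-liner (note `df --output=` is GNU coreutils, matching this Linux host):

```shell
#!/bin/sh
# Report the filesystem type of /, the same way the log does:
# `df --output=fstype /` prints a header plus one value row,
# so `tail -n 1` keeps only the value.
set -eu
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```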
I0804 00:38:11.748260 25510 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0804 00:38:11.748288 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.751057 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.751421 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.751455 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.751712 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.751977 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.752136 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.752311 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.752476 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:11.752674 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:11.752768 25510 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0804 00:38:11.885782 25510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0804 00:38:11.885830 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.888701 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.889069 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.889098 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.889241 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.889427 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.889701 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.889860 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.890052 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:11.890250 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:11.890274 25510 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0804 00:38:13.843420 25510 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0804 00:38:13.843461 25510 machine.go:97] duration metric: took 3.067856975s to provisionDockerMachine
I0804 00:38:13.843473 25510 start.go:293] postStartSetup for "ha-230158-m02" (driver="kvm2")
I0804 00:38:13.843482 25510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0804 00:38:13.843498 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:13.843800 25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0804 00:38:13.843831 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:13.846779 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:13.847277 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:13.847305 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:13.847479 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:13.847712 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:13.847892 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:13.848015 25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:13.937619 25510 ssh_runner.go:195] Run: cat /etc/os-release
I0804 00:38:13.941892 25510 info.go:137] Remote host: Buildroot 2023.02.9
I0804 00:38:13.941913 25510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/addons for local assets ...
I0804 00:38:13.941999 25510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/files for local assets ...
I0804 00:38:13.942104 25510 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> 111362.pem in /etc/ssl/certs
I0804 00:38:13.942117 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /etc/ssl/certs/111362.pem
I0804 00:38:13.942261 25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0804 00:38:13.952175 25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /etc/ssl/certs/111362.pem (1708 bytes)
I0804 00:38:13.976761 25510 start.go:296] duration metric: took 133.275449ms for postStartSetup
I0804 00:38:13.976800 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:13.977069 25510 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0804 00:38:13.977090 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:13.980173 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:13.980544 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:13.980596 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:13.980800 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:13.981072 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:13.981269 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:13.981412 25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:14.071182 25510 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0804 00:38:14.071254 25510 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0804 00:38:14.130544 25510 fix.go:56] duration metric: took 15.774500667s for fixHost
I0804 00:38:14.130591 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:14.133406 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.133762 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:14.133788 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.133983 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:14.134181 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:14.134372 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:14.134501 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:14.134694 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:14.134887 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:14.134901 25510 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0804 00:38:14.255857 25510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731894.223692124
I0804 00:38:14.255885 25510 fix.go:216] guest clock: 1722731894.223692124
I0804 00:38:14.255908 25510 fix.go:229] Guest: 2024-08-04 00:38:14.223692124 +0000 UTC Remote: 2024-08-04 00:38:14.130571736 +0000 UTC m=+15.831243026 (delta=93.120388ms)
I0804 00:38:14.255935 25510 fix.go:200] guest clock delta is within tolerance: 93.120388ms
I0804 00:38:14.255944 25510 start.go:83] releasing machines lock for "ha-230158-m02", held for 15.899924306s
I0804 00:38:14.255973 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:14.256217 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:38:14.258949 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.259352 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:14.259371 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.259571 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:14.260000 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:14.260224 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:14.260339 25510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0804 00:38:14.260409 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:14.260481 25510 ssh_runner.go:195] Run: systemctl --version
I0804 00:38:14.260503 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:14.263324 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.263556 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.263723 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:14.263748 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.263884 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:14.264008 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:14.264031 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.264072 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:14.264152 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:14.264243 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:14.264326 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:14.264387 25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:14.264474 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:14.264609 25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:14.371161 25510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0804 00:38:14.376988 25510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0804 00:38:14.377057 25510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0804 00:38:14.397803 25510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0804 00:38:14.397829 25510 start.go:495] detecting cgroup driver to use...
I0804 00:38:14.397967 25510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0804 00:38:14.420340 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0804 00:38:14.432632 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0804 00:38:14.444438 25510 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0804 00:38:14.444485 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0804 00:38:14.455993 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0804 00:38:14.468484 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0804 00:38:14.480157 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0804 00:38:14.492396 25510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0804 00:38:14.503333 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0804 00:38:14.513683 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0804 00:38:14.524306 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0804 00:38:14.534845 25510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0804 00:38:14.546058 25510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0804 00:38:14.556163 25510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:38:14.675840 25510 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0804 00:38:14.702706 25510 start.go:495] detecting cgroup driver to use...
I0804 00:38:14.702806 25510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0804 00:38:14.725870 25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0804 00:38:14.744691 25510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0804 00:38:14.775797 25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0804 00:38:14.789716 25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0804 00:38:14.802691 25510 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0804 00:38:14.826208 25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0804 00:38:14.839810 25510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0804 00:38:14.859860 25510 ssh_runner.go:195] Run: which cri-dockerd
I0804 00:38:14.864004 25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0804 00:38:14.873703 25510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0804 00:38:14.891236 25510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0804 00:38:15.012240 25510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0804 00:38:15.137153 25510 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0804 00:38:15.137313 25510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0804 00:38:15.155559 25510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:38:15.276327 25510 ssh_runner.go:195] Run: sudo systemctl restart docker
I0804 00:39:16.351082 25510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.074721712s)
I0804 00:39:16.351156 25510 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0804 00:39:16.372451 25510 out.go:177]
W0804 00:39:16.373746 25510 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Aug 04 00:38:12 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.326177741Z" level=info msg="Starting up"
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.327119521Z" level=info msg="containerd not running, starting managed containerd"
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.328077611Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.357083625Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380119843Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380244399Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380327326Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380365537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380659854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380746850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380936636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380980166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381089469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381129276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381357657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381722077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.383943023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384068421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384246838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384299443Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384545997Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384617831Z" level=info msg="metadata content store policy set" policy=shared
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388127474Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388219544Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388276421Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388319410Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388361671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388455180Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388694738Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388804208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388845843Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388892231Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388935349Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388976334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389099850Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389142923Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389183640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389240347Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389279107Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389315090Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389370248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389408112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389451331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389494375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389530635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389577103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389617512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389658338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389704850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389746329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389781917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389817387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389854329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389893335Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389945127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389981949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390070588Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390151066Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390196084Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390231931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390268726Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390302779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390339825Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390382329Z" level=info msg="NRI interface is disabled by configuration."
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390645097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390719485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390779483Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390823688Z" level=info msg="containerd successfully booted in 0.035317s"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.355694047Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.417198292Z" level=info msg="Loading containers: start."
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.603908628Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.697697573Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.760132523Z" level=info msg="Loading containers: done."
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.774708591Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.775080161Z" level=info msg="Daemon has completed initialization"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809171865Z" level=info msg="API listen on /var/run/docker.sock"
Aug 04 00:38:13 ha-230158-m02 systemd[1]: Started Docker Application Container Engine.
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809357764Z" level=info msg="API listen on [::]:2376"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.262432246Z" level=info msg="Processing signal 'terminated'"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264385339Z" level=info msg="Daemon shutdown complete"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264545438Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264639728Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.265397657Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
Aug 04 00:38:15 ha-230158-m02 systemd[1]: Stopping Docker Application Container Engine...
Aug 04 00:38:16 ha-230158-m02 systemd[1]: docker.service: Deactivated successfully.
Aug 04 00:38:16 ha-230158-m02 systemd[1]: Stopped Docker Application Container Engine.
Aug 04 00:38:16 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
Aug 04 00:38:16 ha-230158-m02 dockerd[1098]: time="2024-08-04T00:38:16.310736920Z" level=info msg="Starting up"
Aug 04 00:39:16 ha-230158-m02 dockerd[1098]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 04 00:39:16 ha-230158-m02 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Aug 04 00:38:12 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.326177741Z" level=info msg="Starting up"
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.327119521Z" level=info msg="containerd not running, starting managed containerd"
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.328077611Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.357083625Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380119843Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380244399Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380327326Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380365537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380659854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380746850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380936636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380980166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381089469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381129276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381357657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381722077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.383943023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384068421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384246838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384299443Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384545997Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384617831Z" level=info msg="metadata content store policy set" policy=shared
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388127474Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388219544Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388276421Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388319410Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388361671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388455180Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388694738Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388804208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388845843Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388892231Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388935349Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388976334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389099850Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389142923Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389183640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389240347Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389279107Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389315090Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389370248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389408112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389451331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389494375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389530635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389577103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389617512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389658338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389704850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389746329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389781917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389817387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389854329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389893335Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389945127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389981949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390070588Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390151066Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390196084Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390231931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390268726Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390302779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390339825Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390382329Z" level=info msg="NRI interface is disabled by configuration."
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390645097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390719485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390779483Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390823688Z" level=info msg="containerd successfully booted in 0.035317s"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.355694047Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.417198292Z" level=info msg="Loading containers: start."
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.603908628Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.697697573Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.760132523Z" level=info msg="Loading containers: done."
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.774708591Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.775080161Z" level=info msg="Daemon has completed initialization"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809171865Z" level=info msg="API listen on /var/run/docker.sock"
Aug 04 00:38:13 ha-230158-m02 systemd[1]: Started Docker Application Container Engine.
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809357764Z" level=info msg="API listen on [::]:2376"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.262432246Z" level=info msg="Processing signal 'terminated'"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264385339Z" level=info msg="Daemon shutdown complete"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264545438Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264639728Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.265397657Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
Aug 04 00:38:15 ha-230158-m02 systemd[1]: Stopping Docker Application Container Engine...
Aug 04 00:38:16 ha-230158-m02 systemd[1]: docker.service: Deactivated successfully.
Aug 04 00:38:16 ha-230158-m02 systemd[1]: Stopped Docker Application Container Engine.
Aug 04 00:38:16 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
Aug 04 00:38:16 ha-230158-m02 dockerd[1098]: time="2024-08-04T00:38:16.310736920Z" level=info msg="Starting up"
Aug 04 00:39:16 ha-230158-m02 dockerd[1098]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 04 00:39:16 ha-230158-m02 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0804 00:39:16.373803 25510 out.go:239] *
W0804 00:39:16.376664 25510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0804 00:39:16.378346 25510 out.go:177]
** /stderr **
ha_test.go:422: I0804 00:37:58.331977 25510 out.go:291] Setting OutFile to fd 1 ...
I0804 00:37:58.332120 25510 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:37:58.332130 25510 out.go:304] Setting ErrFile to fd 2...
I0804 00:37:58.332141 25510 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:37:58.332317 25510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:37:58.332555 25510 mustload.go:65] Loading cluster: ha-230158
I0804 00:37:58.332888 25510 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:37:58.333279 25510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:37:58.333322 25510 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:37:58.348801 25510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
I0804 00:37:58.349217 25510 main.go:141] libmachine: () Calling .GetVersion
I0804 00:37:58.349800 25510 main.go:141] libmachine: Using API Version 1
I0804 00:37:58.349821 25510 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:37:58.350182 25510 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:37:58.350406 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
W0804 00:37:58.352024 25510 host.go:58] "ha-230158-m02" host status: Stopped
I0804 00:37:58.354076 25510 out.go:177] * Starting "ha-230158-m02" control-plane node in "ha-230158" cluster
I0804 00:37:58.355485 25510 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0804 00:37:58.355536 25510 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
I0804 00:37:58.355553 25510 cache.go:56] Caching tarball of preloaded images
I0804 00:37:58.355653 25510 preload.go:172] Found /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0804 00:37:58.355665 25510 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0804 00:37:58.355778 25510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:37:58.355958 25510 start.go:360] acquireMachinesLock for ha-230158-m02: {Name:mk3c8b650475b5a29be5f1e49e0345d4de7c1632 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0804 00:37:58.356011 25510 start.go:364] duration metric: took 24.45µs to acquireMachinesLock for "ha-230158-m02"
I0804 00:37:58.356028 25510 start.go:96] Skipping create...Using existing machine configuration
I0804 00:37:58.356038 25510 fix.go:54] fixHost starting: m02
I0804 00:37:58.356354 25510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:37:58.356386 25510 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:37:58.371434 25510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
I0804 00:37:58.371809 25510 main.go:141] libmachine: () Calling .GetVersion
I0804 00:37:58.372299 25510 main.go:141] libmachine: Using API Version 1
I0804 00:37:58.372319 25510 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:37:58.372716 25510 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:37:58.372896 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:37:58.373043 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:37:58.374382 25510 fix.go:112] recreateIfNeeded on ha-230158-m02: state=Stopped err=<nil>
I0804 00:37:58.374406 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
W0804 00:37:58.374556 25510 fix.go:138] unexpected machine state, will restart: <nil>
I0804 00:37:58.376389 25510 out.go:177] * Restarting existing kvm2 VM for "ha-230158-m02" ...
I0804 00:37:58.377504 25510 main.go:141] libmachine: (ha-230158-m02) Calling .Start
I0804 00:37:58.377660 25510 main.go:141] libmachine: (ha-230158-m02) Ensuring networks are active...
I0804 00:37:58.378211 25510 main.go:141] libmachine: (ha-230158-m02) Ensuring network default is active
I0804 00:37:58.378552 25510 main.go:141] libmachine: (ha-230158-m02) Ensuring network mk-ha-230158 is active
I0804 00:37:58.378934 25510 main.go:141] libmachine: (ha-230158-m02) Getting domain xml...
I0804 00:37:58.379491 25510 main.go:141] libmachine: (ha-230158-m02) Creating domain...
I0804 00:37:59.624645 25510 main.go:141] libmachine: (ha-230158-m02) Waiting to get IP...
I0804 00:37:59.625595 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:37:59.626042 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has current primary IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:37:59.626081 25510 main.go:141] libmachine: (ha-230158-m02) Found IP for machine: 192.168.39.188
I0804 00:37:59.626095 25510 main.go:141] libmachine: (ha-230158-m02) Reserving static IP address...
I0804 00:37:59.626602 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "ha-230158-m02", mac: "52:54:00:18:6b:a7", ip: "192.168.39.188"} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:37:59.626628 25510 main.go:141] libmachine: (ha-230158-m02) Reserved static IP address: 192.168.39.188
I0804 00:37:59.626649 25510 main.go:141] libmachine: (ha-230158-m02) DBG | skip adding static IP to network mk-ha-230158 - found existing host DHCP lease matching {name: "ha-230158-m02", mac: "52:54:00:18:6b:a7", ip: "192.168.39.188"}
I0804 00:37:59.626667 25510 main.go:141] libmachine: (ha-230158-m02) DBG | Getting to WaitForSSH function...
I0804 00:37:59.626679 25510 main.go:141] libmachine: (ha-230158-m02) Waiting for SSH to be available...
I0804 00:37:59.628881 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:37:59.629315 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:37:59.629341 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:37:59.629473 25510 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH client type: external
I0804 00:37:59.629501 25510 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa (-rw-------)
I0804 00:37:59.629576 25510 main.go:141] libmachine: (ha-230158-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0804 00:37:59.629606 25510 main.go:141] libmachine: (ha-230158-m02) DBG | About to run SSH command:
I0804 00:37:59.629620 25510 main.go:141] libmachine: (ha-230158-m02) DBG | exit 0
I0804 00:38:10.770721 25510 main.go:141] libmachine: (ha-230158-m02) DBG | SSH cmd err, output: <nil>:
I0804 00:38:10.771116 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetConfigRaw
I0804 00:38:10.771802 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:38:10.774556 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.775061 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:10.775093 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.775352 25510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:38:10.775590 25510 machine.go:94] provisionDockerMachine start ...
I0804 00:38:10.775613 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:10.775828 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:10.778196 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.778563 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:10.778587 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.778743 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:10.778896 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:10.779103 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:10.779249 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:10.779397 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:10.779583 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:10.779595 25510 main.go:141] libmachine: About to run SSH command:
hostname
I0804 00:38:10.894581 25510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0804 00:38:10.894614 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:38:10.894814 25510 buildroot.go:166] provisioning hostname "ha-230158-m02"
I0804 00:38:10.894837 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:38:10.895004 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:10.897476 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.897844 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:10.897881 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:10.897983 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:10.898155 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:10.898354 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:10.898508 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:10.898677 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:10.898885 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:10.898903 25510 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-230158-m02 && echo "ha-230158-m02" | sudo tee /etc/hostname
I0804 00:38:11.026137 25510 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-230158-m02
I0804 00:38:11.026164 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.029047 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.029537 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.029569 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.029738 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.029932 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.030104 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.030262 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.030442 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:11.030614 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:11.030630 25510 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-230158-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-230158-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-230158-m02' | sudo tee -a /etc/hosts;
fi
fi
I0804 00:38:11.156086 25510 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0804 00:38:11.156110 25510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-3947/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-3947/.minikube}
I0804 00:38:11.156138 25510 buildroot.go:174] setting up certificates
I0804 00:38:11.156146 25510 provision.go:84] configureAuth start
I0804 00:38:11.156154 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:38:11.156432 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:38:11.159121 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.159564 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.159594 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.159837 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.162124 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.162514 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.162546 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.162670 25510 provision.go:143] copyHostCerts
I0804 00:38:11.162702 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
I0804 00:38:11.162745 25510 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem, removing ...
I0804 00:38:11.162757 25510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
I0804 00:38:11.162841 25510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem (1679 bytes)
I0804 00:38:11.162933 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
I0804 00:38:11.162957 25510 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem, removing ...
I0804 00:38:11.162964 25510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
I0804 00:38:11.163001 25510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem (1082 bytes)
I0804 00:38:11.163058 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
I0804 00:38:11.163087 25510 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem, removing ...
I0804 00:38:11.163096 25510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
I0804 00:38:11.163133 25510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem (1123 bytes)
I0804 00:38:11.163210 25510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem org=jenkins.ha-230158-m02 san=[127.0.0.1 192.168.39.188 ha-230158-m02 localhost minikube]
I0804 00:38:11.457749 25510 provision.go:177] copyRemoteCerts
I0804 00:38:11.457804 25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0804 00:38:11.457831 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.460834 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.461178 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.461216 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.461413 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.461642 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.461792 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.462029 25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:11.552505 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0804 00:38:11.552571 25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0804 00:38:11.577189 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem -> /etc/docker/server.pem
I0804 00:38:11.577290 25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0804 00:38:11.602036 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0804 00:38:11.602101 25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0804 00:38:11.625880 25510 provision.go:87] duration metric: took 469.715717ms to configureAuth
I0804 00:38:11.625907 25510 buildroot.go:189] setting minikube options for container-runtime
I0804 00:38:11.626132 25510 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:38:11.626154 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:11.626421 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.629200 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.629715 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.629742 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.629913 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.630078 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.630223 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.630379 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.630558 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:11.630716 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:11.630727 25510 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0804 00:38:11.748109 25510 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0804 00:38:11.748129 25510 buildroot.go:70] root file system type: tmpfs
I0804 00:38:11.748260 25510 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0804 00:38:11.748288 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.751057 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.751421 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.751455 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.751712 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.751977 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.752136 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.752311 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.752476 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:11.752674 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:11.752768 25510 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0804 00:38:11.885782 25510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0804 00:38:11.885830 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:11.888701 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.889069 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:11.889098 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:11.889241 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:11.889427 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.889701 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:11.889860 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:11.890052 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:11.890250 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:11.890274 25510 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0804 00:38:13.843420 25510 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0804 00:38:13.843461 25510 machine.go:97] duration metric: took 3.067856975s to provisionDockerMachine
I0804 00:38:13.843473 25510 start.go:293] postStartSetup for "ha-230158-m02" (driver="kvm2")
I0804 00:38:13.843482 25510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0804 00:38:13.843498 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:13.843800 25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0804 00:38:13.843831 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:13.846779 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:13.847277 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:13.847305 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:13.847479 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:13.847712 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:13.847892 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:13.848015 25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:13.937619 25510 ssh_runner.go:195] Run: cat /etc/os-release
I0804 00:38:13.941892 25510 info.go:137] Remote host: Buildroot 2023.02.9
I0804 00:38:13.941913 25510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/addons for local assets ...
I0804 00:38:13.941999 25510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/files for local assets ...
I0804 00:38:13.942104 25510 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> 111362.pem in /etc/ssl/certs
I0804 00:38:13.942117 25510 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /etc/ssl/certs/111362.pem
I0804 00:38:13.942261 25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0804 00:38:13.952175 25510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /etc/ssl/certs/111362.pem (1708 bytes)
I0804 00:38:13.976761 25510 start.go:296] duration metric: took 133.275449ms for postStartSetup
I0804 00:38:13.976800 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:13.977069 25510 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0804 00:38:13.977090 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:13.980173 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:13.980544 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:13.980596 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:13.980800 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:13.981072 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:13.981269 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:13.981412 25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:14.071182 25510 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0804 00:38:14.071254 25510 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0804 00:38:14.130544 25510 fix.go:56] duration metric: took 15.774500667s for fixHost
I0804 00:38:14.130591 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:14.133406 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.133762 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:14.133788 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.133983 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:14.134181 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:14.134372 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:14.134501 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:14.134694 25510 main.go:141] libmachine: Using SSH client type: native
I0804 00:38:14.134887 25510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:38:14.134901 25510 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0804 00:38:14.255857 25510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731894.223692124
I0804 00:38:14.255885 25510 fix.go:216] guest clock: 1722731894.223692124
I0804 00:38:14.255908 25510 fix.go:229] Guest: 2024-08-04 00:38:14.223692124 +0000 UTC Remote: 2024-08-04 00:38:14.130571736 +0000 UTC m=+15.831243026 (delta=93.120388ms)
I0804 00:38:14.255935 25510 fix.go:200] guest clock delta is within tolerance: 93.120388ms
I0804 00:38:14.255944 25510 start.go:83] releasing machines lock for "ha-230158-m02", held for 15.899924306s
I0804 00:38:14.255973 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:14.256217 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:38:14.258949 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.259352 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:14.259371 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.259571 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:14.260000 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:14.260224 25510 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:38:14.260339 25510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0804 00:38:14.260409 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:14.260481 25510 ssh_runner.go:195] Run: systemctl --version
I0804 00:38:14.260503 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:38:14.263324 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.263556 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.263723 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:14.263748 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.263884 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:14.264008 25510 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:38:14.264031 25510 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:38:14.264072 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:14.264152 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:38:14.264243 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:14.264326 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:38:14.264387 25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:14.264474 25510 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:38:14.264609 25510 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:38:14.371161 25510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0804 00:38:14.376988 25510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0804 00:38:14.377057 25510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0804 00:38:14.397803 25510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0804 00:38:14.397829 25510 start.go:495] detecting cgroup driver to use...
I0804 00:38:14.397967 25510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0804 00:38:14.420340 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0804 00:38:14.432632 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0804 00:38:14.444438 25510 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0804 00:38:14.444485 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0804 00:38:14.455993 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0804 00:38:14.468484 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0804 00:38:14.480157 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0804 00:38:14.492396 25510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0804 00:38:14.503333 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0804 00:38:14.513683 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0804 00:38:14.524306 25510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
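The run of `sed` commands above edits `/etc/containerd/config.toml` in place on the VM. As a minimal local sketch of what the `SystemdCgroup` rewrite does (the sample config fragment below is hypothetical, not the VM's actual file):

```shell
# Hypothetical fragment of containerd's config.toml; the real file is larger.
# This reproduces only the SystemdCgroup sed expression from the log above.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same expression minikube runs: force the cgroupfs driver by disabling SystemdCgroup.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp"
result=$(grep SystemdCgroup "$tmp")
echo "$result"   # →   SystemdCgroup = false
rm -f "$tmp"
```

The `\1` backreference preserves the original indentation, which matters in TOML tables nested this deeply.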
I0804 00:38:14.534845 25510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0804 00:38:14.546058 25510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0804 00:38:14.556163 25510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:38:14.675840 25510 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0804 00:38:14.702706 25510 start.go:495] detecting cgroup driver to use...
I0804 00:38:14.702806 25510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0804 00:38:14.725870 25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0804 00:38:14.744691 25510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0804 00:38:14.775797 25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0804 00:38:14.789716 25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0804 00:38:14.802691 25510 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0804 00:38:14.826208 25510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0804 00:38:14.839810 25510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0804 00:38:14.859860 25510 ssh_runner.go:195] Run: which cri-dockerd
I0804 00:38:14.864004 25510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0804 00:38:14.873703 25510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0804 00:38:14.891236 25510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0804 00:38:15.012240 25510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0804 00:38:15.137153 25510 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0804 00:38:15.137313 25510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
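The 130-byte `/etc/docker/daemon.json` being copied over is not shown in the log; a plausible sketch of its contents (the exact keys are an assumption, inferred only from the "cgroupfs" message above) together with a syntax check:

```shell
# Assumed daemon.json contents -- the log only reveals the file size and that
# it configures the "cgroupfs" cgroup driver; the keys below are a guess.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file"
}
EOF
# Validate that the sketch is well-formed JSON before pretending to ship it.
python3 -m json.tool < "$tmp" > /dev/null && ok=1 || ok=0
rm -f "$tmp"
```

A malformed daemon.json is one of the common reasons `systemctl restart docker` fails, so validating it before a restart is cheap insurance.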
I0804 00:38:15.155559 25510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:38:15.276327 25510 ssh_runner.go:195] Run: sudo systemctl restart docker
I0804 00:39:16.351082 25510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.074721712s)
I0804 00:39:16.351156 25510 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0804 00:39:16.372451 25510 out.go:177]
W0804 00:39:16.373746 25510 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Aug 04 00:38:12 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.326177741Z" level=info msg="Starting up"
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.327119521Z" level=info msg="containerd not running, starting managed containerd"
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.328077611Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.357083625Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380119843Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380244399Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380327326Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380365537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380659854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380746850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380936636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380980166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381089469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381129276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381357657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381722077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.383943023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384068421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384246838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384299443Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384545997Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384617831Z" level=info msg="metadata content store policy set" policy=shared
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388127474Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388219544Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388276421Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388319410Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388361671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388455180Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388694738Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388804208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388845843Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388892231Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388935349Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388976334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389099850Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389142923Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389183640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389240347Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389279107Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389315090Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389370248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389408112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389451331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389494375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389530635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389577103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389617512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389658338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389704850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389746329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389781917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389817387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389854329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389893335Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389945127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389981949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390070588Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390151066Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390196084Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390231931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390268726Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390302779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390339825Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390382329Z" level=info msg="NRI interface is disabled by configuration."
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390645097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390719485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390779483Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390823688Z" level=info msg="containerd successfully booted in 0.035317s"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.355694047Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.417198292Z" level=info msg="Loading containers: start."
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.603908628Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.697697573Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.760132523Z" level=info msg="Loading containers: done."
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.774708591Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.775080161Z" level=info msg="Daemon has completed initialization"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809171865Z" level=info msg="API listen on /var/run/docker.sock"
Aug 04 00:38:13 ha-230158-m02 systemd[1]: Started Docker Application Container Engine.
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809357764Z" level=info msg="API listen on [::]:2376"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.262432246Z" level=info msg="Processing signal 'terminated'"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264385339Z" level=info msg="Daemon shutdown complete"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264545438Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264639728Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.265397657Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
Aug 04 00:38:15 ha-230158-m02 systemd[1]: Stopping Docker Application Container Engine...
Aug 04 00:38:16 ha-230158-m02 systemd[1]: docker.service: Deactivated successfully.
Aug 04 00:38:16 ha-230158-m02 systemd[1]: Stopped Docker Application Container Engine.
Aug 04 00:38:16 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
Aug 04 00:38:16 ha-230158-m02 dockerd[1098]: time="2024-08-04T00:38:16.310736920Z" level=info msg="Starting up"
Aug 04 00:39:16 ha-230158-m02 dockerd[1098]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 04 00:39:16 ha-230158-m02 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
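The journal above shows the failure sequence: docker came up cleanly at 00:38:13 with its managed containerd, was restarted at 00:38:15, and the new dockerd (pid 1098) then waited a full minute before giving up dialing /run/containerd/containerd.sock — the socket of the standalone containerd that had been stopped at 00:38:14 — which is why docker.service exited with status 1. A small, hypothetical triage snippet that pulls the root-cause line out of a saved journal excerpt:

```shell
# Hypothetical triage helper: given saved `journalctl -u docker` output,
# extract the dial failure that explains the exit status 1.
excerpt=$(cat <<'EOF'
Aug 04 00:38:16 ha-230158-m02 dockerd[1098]: time="2024-08-04T00:38:16.310736920Z" level=info msg="Starting up"
Aug 04 00:39:16 ha-230158-m02 dockerd[1098]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
EOF
)
# grep -o prints only the matching fragment; head keeps the first occurrence.
cause=$(printf '%s\n' "$excerpt" | grep -o 'failed to dial "[^"]*"' | head -n 1)
echo "$cause"   # failed to dial "/run/containerd/containerd.sock"
```

On a live node the same pattern applied to `journalctl --no-pager -u docker` would surface this line without scrolling through the plugin-loading noise.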
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Aug 04 00:38:12 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.326177741Z" level=info msg="Starting up"
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.327119521Z" level=info msg="containerd not running, starting managed containerd"
Aug 04 00:38:12 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:12.328077611Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.357083625Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380119843Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380244399Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380327326Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380365537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380659854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380746850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380936636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.380980166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381089469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381129276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381357657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.381722077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.383943023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384068421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384246838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384299443Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384545997Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.384617831Z" level=info msg="metadata content store policy set" policy=shared
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388127474Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388219544Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388276421Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388319410Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388361671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388455180Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388694738Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388804208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388845843Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388892231Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388935349Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.388976334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389099850Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389142923Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389183640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389240347Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389279107Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389315090Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389370248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389408112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389451331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389494375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389530635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389577103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389617512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389658338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389704850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389746329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389781917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389817387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389854329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389893335Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389945127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.389981949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390070588Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390151066Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390196084Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390231931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390268726Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390302779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390339825Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390382329Z" level=info msg="NRI interface is disabled by configuration."
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390645097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390719485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390779483Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Aug 04 00:38:12 ha-230158-m02 dockerd[495]: time="2024-08-04T00:38:12.390823688Z" level=info msg="containerd successfully booted in 0.035317s"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.355694047Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.417198292Z" level=info msg="Loading containers: start."
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.603908628Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.697697573Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.760132523Z" level=info msg="Loading containers: done."
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.774708591Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.775080161Z" level=info msg="Daemon has completed initialization"
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809171865Z" level=info msg="API listen on /var/run/docker.sock"
Aug 04 00:38:13 ha-230158-m02 systemd[1]: Started Docker Application Container Engine.
Aug 04 00:38:13 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:13.809357764Z" level=info msg="API listen on [::]:2376"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.262432246Z" level=info msg="Processing signal 'terminated'"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264385339Z" level=info msg="Daemon shutdown complete"
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264545438Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.264639728Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Aug 04 00:38:15 ha-230158-m02 dockerd[488]: time="2024-08-04T00:38:15.265397657Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
Aug 04 00:38:15 ha-230158-m02 systemd[1]: Stopping Docker Application Container Engine...
Aug 04 00:38:16 ha-230158-m02 systemd[1]: docker.service: Deactivated successfully.
Aug 04 00:38:16 ha-230158-m02 systemd[1]: Stopped Docker Application Container Engine.
Aug 04 00:38:16 ha-230158-m02 systemd[1]: Starting Docker Application Container Engine...
Aug 04 00:38:16 ha-230158-m02 dockerd[1098]: time="2024-08-04T00:38:16.310736920Z" level=info msg="Starting up"
Aug 04 00:39:16 ha-230158-m02 dockerd[1098]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Aug 04 00:39:16 ha-230158-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 04 00:39:16 ha-230158-m02 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0804 00:39:16.373803 25510 out.go:239] *
*
W0804 00:39:16.376664 25510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0804 00:39:16.378346 25510 out.go:177]
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-230158 node start m02 -v=7 --alsologtostderr": exit status 90
ha_test.go:428: (dbg) Run: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (781.074493ms)
-- stdout --
ha-230158
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m02
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured
ha-230158-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m04
type: Worker
host: Running
kubelet: Running
-- /stdout --
** stderr **
I0804 00:39:16.446979 25881 out.go:291] Setting OutFile to fd 1 ...
I0804 00:39:16.447079 25881 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:16.447084 25881 out.go:304] Setting ErrFile to fd 2...
I0804 00:39:16.447088 25881 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:16.447276 25881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:39:16.447427 25881 out.go:298] Setting JSON to false
I0804 00:39:16.447445 25881 mustload.go:65] Loading cluster: ha-230158
I0804 00:39:16.447491 25881 notify.go:220] Checking for updates...
I0804 00:39:16.447784 25881 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:39:16.447796 25881 status.go:255] checking status of ha-230158 ...
I0804 00:39:16.448152 25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:16.448219 25881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:16.468500 25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34607
I0804 00:39:16.468919 25881 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:16.469528 25881 main.go:141] libmachine: Using API Version 1
I0804 00:39:16.469555 25881 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:16.469938 25881 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:16.470298 25881 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:39:16.471874 25881 status.go:330] ha-230158 host status = "Running" (err=<nil>)
I0804 00:39:16.471887 25881 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:16.472191 25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:16.472230 25881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:16.487173 25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
I0804 00:39:16.487542 25881 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:16.487933 25881 main.go:141] libmachine: Using API Version 1
I0804 00:39:16.487959 25881 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:16.488272 25881 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:16.488445 25881 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:39:16.491622 25881 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:16.492137 25881 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:16.492161 25881 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:16.492241 25881 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:16.492553 25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:16.492592 25881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:16.508012 25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44043
I0804 00:39:16.508448 25881 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:16.508941 25881 main.go:141] libmachine: Using API Version 1
I0804 00:39:16.508965 25881 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:16.509300 25881 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:16.509492 25881 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:39:16.509698 25881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:16.509729 25881 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:39:16.512943 25881 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:16.513430 25881 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:16.513483 25881 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:16.513619 25881 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:39:16.513797 25881 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:39:16.513929 25881 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:39:16.514104 25881 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:39:16.603638 25881 ssh_runner.go:195] Run: systemctl --version
I0804 00:39:16.611500 25881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:16.628752 25881 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:16.628773 25881 api_server.go:166] Checking apiserver status ...
I0804 00:39:16.628801 25881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:16.643175 25881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
W0804 00:39:16.652817 25881 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:16.652856 25881 ssh_runner.go:195] Run: ls
I0804 00:39:16.658780 25881 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:16.665624 25881 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:16.665647 25881 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
I0804 00:39:16.665659 25881 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:16.665691 25881 status.go:255] checking status of ha-230158-m02 ...
I0804 00:39:16.666098 25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:16.666148 25881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:16.680590 25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
I0804 00:39:16.680985 25881 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:16.681474 25881 main.go:141] libmachine: Using API Version 1
I0804 00:39:16.681491 25881 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:16.681777 25881 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:16.681934 25881 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:39:16.683714 25881 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
I0804 00:39:16.683732 25881 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:16.683996 25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:16.684026 25881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:16.699143 25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37313
I0804 00:39:16.699452 25881 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:16.699868 25881 main.go:141] libmachine: Using API Version 1
I0804 00:39:16.699891 25881 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:16.700160 25881 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:16.700346 25881 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:39:16.702734 25881 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:16.703088 25881 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:16.703117 25881 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:16.703262 25881 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:16.703571 25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:16.703616 25881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:16.718026 25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
I0804 00:39:16.718394 25881 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:16.718848 25881 main.go:141] libmachine: Using API Version 1
I0804 00:39:16.718870 25881 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:16.719188 25881 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:16.719393 25881 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:39:16.719606 25881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:16.719626 25881 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:39:16.722480 25881 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:16.722887 25881 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:16.722919 25881 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:16.723104 25881 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:39:16.723301 25881 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:39:16.723454 25881 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:39:16.723609 25881 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:39:16.810346 25881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:16.825253 25881 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:16.825281 25881 api_server.go:166] Checking apiserver status ...
I0804 00:39:16.825313 25881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0804 00:39:16.838609 25881 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0804 00:39:16.838634 25881 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
I0804 00:39:16.838645 25881 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:16.838664 25881 status.go:255] checking status of ha-230158-m03 ...
I0804 00:39:16.838975 25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:16.839022 25881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:16.854182 25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
I0804 00:39:16.854754 25881 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:16.855289 25881 main.go:141] libmachine: Using API Version 1
I0804 00:39:16.855321 25881 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:16.855724 25881 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:16.855957 25881 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
I0804 00:39:16.857297 25881 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
I0804 00:39:16.857310 25881 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:16.857676 25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:16.857712 25881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:16.874346 25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33937
I0804 00:39:16.874810 25881 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:16.875294 25881 main.go:141] libmachine: Using API Version 1
I0804 00:39:16.875318 25881 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:16.875628 25881 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:16.875779 25881 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:39:16.878442 25881 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:16.878952 25881 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:16.878974 25881 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:16.879136 25881 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:16.879456 25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:16.879491 25881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:16.893139 25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
I0804 00:39:16.893546 25881 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:16.894036 25881 main.go:141] libmachine: Using API Version 1
I0804 00:39:16.894053 25881 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:16.894322 25881 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:16.894489 25881 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:39:16.894645 25881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:16.894675 25881 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:39:16.897190 25881 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:16.897605 25881 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:16.897632 25881 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:16.897750 25881 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:39:16.897879 25881 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:39:16.898027 25881 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:39:16.898194 25881 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
I0804 00:39:16.978345 25881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:16.995749 25881 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:16.995778 25881 api_server.go:166] Checking apiserver status ...
I0804 00:39:16.995815 25881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:17.010926 25881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
W0804 00:39:17.020984 25881 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:17.021031 25881 ssh_runner.go:195] Run: ls
I0804 00:39:17.025361 25881 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:17.029695 25881 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:17.029718 25881 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
I0804 00:39:17.029729 25881 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:17.029747 25881 status.go:255] checking status of ha-230158-m04 ...
I0804 00:39:17.030118 25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:17.030154 25881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:17.044802 25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35331
I0804 00:39:17.045168 25881 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:17.045630 25881 main.go:141] libmachine: Using API Version 1
I0804 00:39:17.045659 25881 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:17.045976 25881 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:17.046258 25881 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
I0804 00:39:17.047772 25881 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
I0804 00:39:17.047789 25881 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:17.048051 25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:17.048079 25881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:17.061807 25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38977
I0804 00:39:17.062197 25881 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:17.062732 25881 main.go:141] libmachine: Using API Version 1
I0804 00:39:17.062766 25881 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:17.063059 25881 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:17.063257 25881 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
I0804 00:39:17.066106 25881 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:17.066536 25881 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:17.066560 25881 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:17.066695 25881 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:17.066973 25881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:17.067002 25881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:17.080507 25881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
I0804 00:39:17.080875 25881 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:17.081289 25881 main.go:141] libmachine: Using API Version 1
I0804 00:39:17.081307 25881 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:17.081568 25881 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:17.081725 25881 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
I0804 00:39:17.081874 25881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:17.081901 25881 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
I0804 00:39:17.084272 25881 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:17.084606 25881 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:17.084630 25881 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:17.084769 25881 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
I0804 00:39:17.084936 25881 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
I0804 00:39:17.085083 25881 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
I0804 00:39:17.085210 25881 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
I0804 00:39:17.166318 25881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:17.182795 25881 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (760.917148ms)
-- stdout --
ha-230158
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m02
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured
ha-230158-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m04
type: Worker
host: Running
kubelet: Running
-- /stdout --
** stderr **
I0804 00:39:18.188252 25967 out.go:291] Setting OutFile to fd 1 ...
I0804 00:39:18.188365 25967 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:18.188375 25967 out.go:304] Setting ErrFile to fd 2...
I0804 00:39:18.188382 25967 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:18.188646 25967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:39:18.188890 25967 out.go:298] Setting JSON to false
I0804 00:39:18.188920 25967 mustload.go:65] Loading cluster: ha-230158
I0804 00:39:18.189040 25967 notify.go:220] Checking for updates...
I0804 00:39:18.189448 25967 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:39:18.189470 25967 status.go:255] checking status of ha-230158 ...
I0804 00:39:18.190066 25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:18.190115 25967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:18.209263 25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36127
I0804 00:39:18.209647 25967 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:18.210275 25967 main.go:141] libmachine: Using API Version 1
I0804 00:39:18.210304 25967 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:18.210658 25967 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:18.210882 25967 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:39:18.212547 25967 status.go:330] ha-230158 host status = "Running" (err=<nil>)
I0804 00:39:18.212563 25967 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:18.212882 25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:18.212930 25967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:18.228301 25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35289
I0804 00:39:18.228675 25967 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:18.229159 25967 main.go:141] libmachine: Using API Version 1
I0804 00:39:18.229184 25967 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:18.229553 25967 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:18.229754 25967 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:39:18.232553 25967 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:18.233020 25967 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:18.233053 25967 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:18.233150 25967 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:18.233451 25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:18.233510 25967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:18.249422 25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
I0804 00:39:18.249854 25967 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:18.250260 25967 main.go:141] libmachine: Using API Version 1
I0804 00:39:18.250283 25967 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:18.250629 25967 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:18.250793 25967 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:39:18.251014 25967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:18.251044 25967 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:39:18.253831 25967 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:18.254271 25967 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:18.254307 25967 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:18.254398 25967 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:39:18.254557 25967 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:39:18.254701 25967 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:39:18.254822 25967 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:39:18.337965 25967 ssh_runner.go:195] Run: systemctl --version
I0804 00:39:18.346606 25967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:18.363492 25967 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:18.363519 25967 api_server.go:166] Checking apiserver status ...
I0804 00:39:18.363557 25967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:18.377960 25967 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
W0804 00:39:18.388019 25967 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:18.388064 25967 ssh_runner.go:195] Run: ls
I0804 00:39:18.392471 25967 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:18.399721 25967 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:18.399739 25967 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
I0804 00:39:18.399749 25967 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:18.399770 25967 status.go:255] checking status of ha-230158-m02 ...
I0804 00:39:18.400150 25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:18.400190 25967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:18.415591 25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37623
I0804 00:39:18.415950 25967 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:18.416423 25967 main.go:141] libmachine: Using API Version 1
I0804 00:39:18.416438 25967 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:18.416734 25967 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:18.416900 25967 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:39:18.418596 25967 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
I0804 00:39:18.418615 25967 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:18.418890 25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:18.418921 25967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:18.433121 25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44637
I0804 00:39:18.433544 25967 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:18.433926 25967 main.go:141] libmachine: Using API Version 1
I0804 00:39:18.433950 25967 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:18.434311 25967 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:18.434518 25967 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:39:18.437210 25967 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:18.437714 25967 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:18.437752 25967 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:18.437812 25967 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:18.438099 25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:18.438130 25967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:18.452557 25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39741
I0804 00:39:18.452973 25967 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:18.453492 25967 main.go:141] libmachine: Using API Version 1
I0804 00:39:18.453513 25967 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:18.453785 25967 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:18.453969 25967 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:39:18.454140 25967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:18.454162 25967 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:39:18.456937 25967 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:18.457323 25967 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:18.457349 25967 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:18.457478 25967 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:39:18.457623 25967 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:39:18.457772 25967 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:39:18.457947 25967 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:39:18.541483 25967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:18.557391 25967 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:18.557416 25967 api_server.go:166] Checking apiserver status ...
I0804 00:39:18.557462 25967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0804 00:39:18.569932 25967 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0804 00:39:18.569965 25967 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
I0804 00:39:18.569977 25967 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:18.570006 25967 status.go:255] checking status of ha-230158-m03 ...
I0804 00:39:18.570400 25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:18.570440 25967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:18.585174 25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
I0804 00:39:18.585573 25967 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:18.586012 25967 main.go:141] libmachine: Using API Version 1
I0804 00:39:18.586032 25967 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:18.586385 25967 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:18.586578 25967 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
I0804 00:39:18.588082 25967 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
I0804 00:39:18.588095 25967 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:18.588359 25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:18.588386 25967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:18.603130 25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39793
I0804 00:39:18.603535 25967 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:18.603993 25967 main.go:141] libmachine: Using API Version 1
I0804 00:39:18.604016 25967 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:18.604355 25967 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:18.604544 25967 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:39:18.607076 25967 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:18.607445 25967 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:18.607481 25967 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:18.607599 25967 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:18.607873 25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:18.607902 25967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:18.621737 25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41985
I0804 00:39:18.622113 25967 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:18.622558 25967 main.go:141] libmachine: Using API Version 1
I0804 00:39:18.622579 25967 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:18.622937 25967 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:18.623090 25967 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:39:18.623313 25967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:18.623340 25967 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:39:18.626310 25967 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:18.626805 25967 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:18.626831 25967 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:18.626966 25967 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:39:18.627142 25967 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:39:18.627355 25967 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:39:18.627520 25967 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
I0804 00:39:18.705732 25967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:18.721205 25967 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:18.721229 25967 api_server.go:166] Checking apiserver status ...
I0804 00:39:18.721259 25967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:18.736058 25967 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
W0804 00:39:18.746379 25967 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:18.746429 25967 ssh_runner.go:195] Run: ls
I0804 00:39:18.750833 25967 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:18.755000 25967 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:18.755021 25967 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
I0804 00:39:18.755029 25967 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:18.755046 25967 status.go:255] checking status of ha-230158-m04 ...
I0804 00:39:18.755408 25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:18.755457 25967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:18.770168 25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
I0804 00:39:18.770620 25967 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:18.771073 25967 main.go:141] libmachine: Using API Version 1
I0804 00:39:18.771094 25967 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:18.771408 25967 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:18.771608 25967 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
I0804 00:39:18.773243 25967 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
I0804 00:39:18.773264 25967 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:18.773580 25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:18.773614 25967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:18.788564 25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39011
I0804 00:39:18.788985 25967 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:18.789464 25967 main.go:141] libmachine: Using API Version 1
I0804 00:39:18.789486 25967 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:18.789825 25967 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:18.790021 25967 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
I0804 00:39:18.792979 25967 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:18.793396 25967 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:18.793431 25967 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:18.793575 25967 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:18.793878 25967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:18.793929 25967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:18.809117 25967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37695
I0804 00:39:18.809562 25967 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:18.809995 25967 main.go:141] libmachine: Using API Version 1
I0804 00:39:18.810013 25967 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:18.810342 25967 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:18.810546 25967 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
I0804 00:39:18.810747 25967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:18.810768 25967 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
I0804 00:39:18.813507 25967 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:18.813999 25967 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:18.814023 25967 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:18.814172 25967 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
I0804 00:39:18.814354 25967 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
I0804 00:39:18.814505 25967 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
I0804 00:39:18.814647 25967 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
I0804 00:39:18.893886 25967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:18.907678 25967 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (762.791458ms)
-- stdout --
ha-230158
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m02
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured
ha-230158-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m04
type: Worker
host: Running
kubelet: Running
-- /stdout --
** stderr **
I0804 00:39:21.045311 26066 out.go:291] Setting OutFile to fd 1 ...
I0804 00:39:21.045407 26066 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:21.045411 26066 out.go:304] Setting ErrFile to fd 2...
I0804 00:39:21.045416 26066 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:21.045593 26066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:39:21.045744 26066 out.go:298] Setting JSON to false
I0804 00:39:21.045766 26066 mustload.go:65] Loading cluster: ha-230158
I0804 00:39:21.045858 26066 notify.go:220] Checking for updates...
I0804 00:39:21.046204 26066 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:39:21.046220 26066 status.go:255] checking status of ha-230158 ...
I0804 00:39:21.046692 26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:21.046745 26066 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:21.062153 26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46327
I0804 00:39:21.062592 26066 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:21.063188 26066 main.go:141] libmachine: Using API Version 1
I0804 00:39:21.063211 26066 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:21.063585 26066 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:21.063775 26066 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:39:21.078972 26066 status.go:330] ha-230158 host status = "Running" (err=<nil>)
I0804 00:39:21.078990 26066 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:21.079274 26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:21.079306 26066 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:21.093889 26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42955
I0804 00:39:21.094289 26066 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:21.094749 26066 main.go:141] libmachine: Using API Version 1
I0804 00:39:21.094773 26066 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:21.095126 26066 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:21.095372 26066 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:39:21.098319 26066 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:21.098871 26066 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:21.098903 26066 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:21.099055 26066 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:21.099332 26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:21.099370 26066 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:21.114420 26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39777
I0804 00:39:21.114780 26066 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:21.115212 26066 main.go:141] libmachine: Using API Version 1
I0804 00:39:21.115232 26066 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:21.115527 26066 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:21.115763 26066 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:39:21.115939 26066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:21.115970 26066 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:39:21.118755 26066 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:21.119175 26066 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:21.119203 26066 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:21.119325 26066 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:39:21.119498 26066 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:39:21.119742 26066 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:39:21.119895 26066 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:39:21.202043 26066 ssh_runner.go:195] Run: systemctl --version
I0804 00:39:21.208451 26066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:21.222056 26066 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:21.222078 26066 api_server.go:166] Checking apiserver status ...
I0804 00:39:21.222106 26066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:21.235933 26066 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
W0804 00:39:21.246225 26066 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:21.246292 26066 ssh_runner.go:195] Run: ls
I0804 00:39:21.252444 26066 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:21.256611 26066 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:21.256630 26066 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
I0804 00:39:21.256638 26066 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:21.256654 26066 status.go:255] checking status of ha-230158-m02 ...
I0804 00:39:21.256976 26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:21.257011 26066 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:21.271632 26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44623
I0804 00:39:21.272048 26066 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:21.272525 26066 main.go:141] libmachine: Using API Version 1
I0804 00:39:21.272552 26066 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:21.272876 26066 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:21.273042 26066 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:39:21.274436 26066 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
I0804 00:39:21.274453 26066 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:21.274941 26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:21.274988 26066 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:21.290540 26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
I0804 00:39:21.290896 26066 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:21.291360 26066 main.go:141] libmachine: Using API Version 1
I0804 00:39:21.291382 26066 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:21.291682 26066 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:21.291854 26066 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:39:21.294578 26066 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:21.294972 26066 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:21.294992 26066 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:21.295161 26066 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:21.295500 26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:21.295543 26066 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:21.309619 26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43119
I0804 00:39:21.309957 26066 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:21.310428 26066 main.go:141] libmachine: Using API Version 1
I0804 00:39:21.310447 26066 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:21.310847 26066 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:21.311054 26066 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:39:21.311246 26066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:21.311264 26066 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:39:21.313773 26066 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:21.314177 26066 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:21.314205 26066 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:21.314335 26066 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:39:21.314481 26066 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:39:21.314645 26066 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:39:21.314809 26066 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:39:21.397122 26066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:21.411805 26066 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:21.411831 26066 api_server.go:166] Checking apiserver status ...
I0804 00:39:21.411869 26066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0804 00:39:21.423600 26066 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0804 00:39:21.423618 26066 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
I0804 00:39:21.423628 26066 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:21.423644 26066 status.go:255] checking status of ha-230158-m03 ...
I0804 00:39:21.423961 26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:21.424000 26066 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:21.439785 26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
I0804 00:39:21.440172 26066 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:21.440702 26066 main.go:141] libmachine: Using API Version 1
I0804 00:39:21.440727 26066 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:21.441003 26066 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:21.441203 26066 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
I0804 00:39:21.443034 26066 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
I0804 00:39:21.443052 26066 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:21.443460 26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:21.443504 26066 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:21.457964 26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42657
I0804 00:39:21.458398 26066 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:21.458838 26066 main.go:141] libmachine: Using API Version 1
I0804 00:39:21.458862 26066 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:21.459185 26066 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:21.459386 26066 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:39:21.462057 26066 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:21.462610 26066 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:21.462637 26066 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:21.462802 26066 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:21.463175 26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:21.463218 26066 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:21.477914 26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39659
I0804 00:39:21.478308 26066 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:21.478749 26066 main.go:141] libmachine: Using API Version 1
I0804 00:39:21.478771 26066 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:21.479070 26066 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:21.479284 26066 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:39:21.479470 26066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:21.479501 26066 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:39:21.482188 26066 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:21.482567 26066 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:21.482594 26066 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:21.482699 26066 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:39:21.482871 26066 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:39:21.482995 26066 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:39:21.483164 26066 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
I0804 00:39:21.562381 26066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:21.578688 26066 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:21.578712 26066 api_server.go:166] Checking apiserver status ...
I0804 00:39:21.578743 26066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:21.595376 26066 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
W0804 00:39:21.604847 26066 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:21.604883 26066 ssh_runner.go:195] Run: ls
I0804 00:39:21.609454 26066 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:21.613915 26066 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:21.613938 26066 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
I0804 00:39:21.613949 26066 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:21.613967 26066 status.go:255] checking status of ha-230158-m04 ...
I0804 00:39:21.614321 26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:21.614361 26066 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:21.628680 26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39897
I0804 00:39:21.629039 26066 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:21.629484 26066 main.go:141] libmachine: Using API Version 1
I0804 00:39:21.629504 26066 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:21.629773 26066 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:21.629927 26066 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
I0804 00:39:21.631436 26066 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
I0804 00:39:21.631450 26066 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:21.631731 26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:21.631783 26066 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:21.646373 26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
I0804 00:39:21.646782 26066 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:21.647320 26066 main.go:141] libmachine: Using API Version 1
I0804 00:39:21.647348 26066 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:21.647648 26066 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:21.647849 26066 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
I0804 00:39:21.650277 26066 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:21.650773 26066 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:21.650807 26066 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:21.650967 26066 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:21.651243 26066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:21.651273 26066 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:21.665889 26066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
I0804 00:39:21.666278 26066 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:21.666715 26066 main.go:141] libmachine: Using API Version 1
I0804 00:39:21.666741 26066 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:21.667031 26066 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:21.667228 26066 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
I0804 00:39:21.667407 26066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:21.667425 26066 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
I0804 00:39:21.670130 26066 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:21.670550 26066 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:21.670570 26066 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:21.670720 26066 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
I0804 00:39:21.670914 26066 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
I0804 00:39:21.671054 26066 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
I0804 00:39:21.671197 26066 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
I0804 00:39:21.749941 26066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:21.765829 26066 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
E0804 00:39:23.753751 11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/functional-168863/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (763.502208ms)
-- stdout --
ha-230158
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m02
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured
ha-230158-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m04
type: Worker
host: Running
kubelet: Running
-- /stdout --
** stderr **
I0804 00:39:23.260878 26151 out.go:291] Setting OutFile to fd 1 ...
I0804 00:39:23.261149 26151 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:23.261156 26151 out.go:304] Setting ErrFile to fd 2...
I0804 00:39:23.261160 26151 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:23.261393 26151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:39:23.261600 26151 out.go:298] Setting JSON to false
I0804 00:39:23.261622 26151 mustload.go:65] Loading cluster: ha-230158
I0804 00:39:23.262010 26151 notify.go:220] Checking for updates...
I0804 00:39:23.263931 26151 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:39:23.263958 26151 status.go:255] checking status of ha-230158 ...
I0804 00:39:23.264536 26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:23.264603 26151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:23.283993 26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
I0804 00:39:23.284514 26151 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:23.285154 26151 main.go:141] libmachine: Using API Version 1
I0804 00:39:23.285174 26151 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:23.285513 26151 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:23.285690 26151 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:39:23.287144 26151 status.go:330] ha-230158 host status = "Running" (err=<nil>)
I0804 00:39:23.287162 26151 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:23.287555 26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:23.287597 26151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:23.301785 26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40833
I0804 00:39:23.302105 26151 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:23.302522 26151 main.go:141] libmachine: Using API Version 1
I0804 00:39:23.302541 26151 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:23.302820 26151 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:23.303029 26151 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:39:23.305959 26151 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:23.306486 26151 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:23.306517 26151 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:23.306825 26151 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:23.307196 26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:23.307237 26151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:23.321371 26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
I0804 00:39:23.321752 26151 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:23.322159 26151 main.go:141] libmachine: Using API Version 1
I0804 00:39:23.322181 26151 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:23.322511 26151 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:23.322675 26151 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:39:23.322857 26151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:23.322889 26151 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:39:23.325524 26151 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:23.325940 26151 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:23.325961 26151 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:23.326111 26151 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:39:23.326291 26151 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:39:23.326448 26151 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:39:23.326586 26151 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:39:23.413694 26151 ssh_runner.go:195] Run: systemctl --version
I0804 00:39:23.420782 26151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:23.434743 26151 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:23.434769 26151 api_server.go:166] Checking apiserver status ...
I0804 00:39:23.434803 26151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:23.450555 26151 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
W0804 00:39:23.459911 26151 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:23.459971 26151 ssh_runner.go:195] Run: ls
I0804 00:39:23.464899 26151 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:23.469215 26151 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:23.469240 26151 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
I0804 00:39:23.469257 26151 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:23.469276 26151 status.go:255] checking status of ha-230158-m02 ...
I0804 00:39:23.469633 26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:23.469673 26151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:23.484185 26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
I0804 00:39:23.484590 26151 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:23.485013 26151 main.go:141] libmachine: Using API Version 1
I0804 00:39:23.485035 26151 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:23.485405 26151 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:23.485580 26151 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:39:23.487194 26151 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
I0804 00:39:23.487212 26151 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:23.487504 26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:23.487540 26151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:23.501479 26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
I0804 00:39:23.501788 26151 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:23.502159 26151 main.go:141] libmachine: Using API Version 1
I0804 00:39:23.502179 26151 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:23.502498 26151 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:23.502665 26151 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:39:23.505216 26151 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:23.505730 26151 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:23.505753 26151 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:23.505887 26151 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:23.506167 26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:23.506205 26151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:23.520520 26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35507
I0804 00:39:23.520872 26151 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:23.521304 26151 main.go:141] libmachine: Using API Version 1
I0804 00:39:23.521328 26151 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:23.521627 26151 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:23.521817 26151 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:39:23.521980 26151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:23.521999 26151 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:39:23.524675 26151 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:23.525009 26151 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:23.525034 26151 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:23.525184 26151 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:39:23.525351 26151 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:39:23.525519 26151 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:39:23.525652 26151 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:39:23.609158 26151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:23.622964 26151 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:23.622987 26151 api_server.go:166] Checking apiserver status ...
I0804 00:39:23.623022 26151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0804 00:39:23.634412 26151 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0804 00:39:23.634434 26151 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
I0804 00:39:23.634446 26151 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:23.634463 26151 status.go:255] checking status of ha-230158-m03 ...
I0804 00:39:23.634793 26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:23.634836 26151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:23.651309 26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
I0804 00:39:23.651765 26151 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:23.652234 26151 main.go:141] libmachine: Using API Version 1
I0804 00:39:23.652258 26151 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:23.652588 26151 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:23.652794 26151 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
I0804 00:39:23.654197 26151 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
I0804 00:39:23.654214 26151 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:23.654528 26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:23.654560 26151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:23.670593 26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
I0804 00:39:23.670977 26151 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:23.671446 26151 main.go:141] libmachine: Using API Version 1
I0804 00:39:23.671469 26151 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:23.671770 26151 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:23.671940 26151 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:39:23.674482 26151 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:23.674861 26151 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:23.674888 26151 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:23.675003 26151 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:23.675310 26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:23.675353 26151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:23.690209 26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
I0804 00:39:23.690676 26151 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:23.691124 26151 main.go:141] libmachine: Using API Version 1
I0804 00:39:23.691141 26151 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:23.691496 26151 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:23.691698 26151 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:39:23.691922 26151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:23.691941 26151 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:39:23.694503 26151 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:23.694916 26151 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:23.694947 26151 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:23.695051 26151 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:39:23.695200 26151 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:39:23.695348 26151 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:39:23.695460 26151 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
I0804 00:39:23.772490 26151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:23.787137 26151 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:23.787165 26151 api_server.go:166] Checking apiserver status ...
I0804 00:39:23.787193 26151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:23.807552 26151 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
W0804 00:39:23.818133 26151 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:23.818186 26151 ssh_runner.go:195] Run: ls
I0804 00:39:23.822536 26151 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:23.828413 26151 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:23.828441 26151 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
I0804 00:39:23.828453 26151 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:23.828472 26151 status.go:255] checking status of ha-230158-m04 ...
I0804 00:39:23.828746 26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:23.828780 26151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:23.846727 26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
I0804 00:39:23.847145 26151 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:23.847684 26151 main.go:141] libmachine: Using API Version 1
I0804 00:39:23.847703 26151 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:23.847991 26151 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:23.848180 26151 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
I0804 00:39:23.849882 26151 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
I0804 00:39:23.849897 26151 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:23.850191 26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:23.850244 26151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:23.864944 26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36771
I0804 00:39:23.865301 26151 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:23.865723 26151 main.go:141] libmachine: Using API Version 1
I0804 00:39:23.865744 26151 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:23.866058 26151 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:23.866220 26151 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
I0804 00:39:23.869054 26151 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:23.869492 26151 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:23.869520 26151 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:23.869652 26151 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:23.869940 26151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:23.869991 26151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:23.884312 26151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
I0804 00:39:23.884710 26151 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:23.885224 26151 main.go:141] libmachine: Using API Version 1
I0804 00:39:23.885245 26151 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:23.885570 26151 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:23.885737 26151 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
I0804 00:39:23.885914 26151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:23.885933 26151 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
I0804 00:39:23.888463 26151 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:23.889010 26151 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:23.889034 26151 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:23.889195 26151 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
I0804 00:39:23.889349 26151 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
I0804 00:39:23.889631 26151 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
I0804 00:39:23.889816 26151 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
I0804 00:39:23.965324 26151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:23.980527 26151 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (774.189737ms)
-- stdout --
ha-230158
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m02
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured
ha-230158-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m04
type: Worker
host: Running
kubelet: Running
-- /stdout --
** stderr **
I0804 00:39:27.138816 26250 out.go:291] Setting OutFile to fd 1 ...
I0804 00:39:27.139074 26250 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:27.139084 26250 out.go:304] Setting ErrFile to fd 2...
I0804 00:39:27.139090 26250 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:27.139309 26250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:39:27.139476 26250 out.go:298] Setting JSON to false
I0804 00:39:27.139504 26250 mustload.go:65] Loading cluster: ha-230158
I0804 00:39:27.139610 26250 notify.go:220] Checking for updates...
I0804 00:39:27.139880 26250 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:39:27.139895 26250 status.go:255] checking status of ha-230158 ...
I0804 00:39:27.140258 26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:27.140324 26250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:27.155658 26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
I0804 00:39:27.156078 26250 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:27.156589 26250 main.go:141] libmachine: Using API Version 1
I0804 00:39:27.156610 26250 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:27.156925 26250 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:27.157115 26250 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:39:27.158641 26250 status.go:330] ha-230158 host status = "Running" (err=<nil>)
I0804 00:39:27.158656 26250 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:27.159034 26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:27.159074 26250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:27.178782 26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41447
I0804 00:39:27.179180 26250 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:27.179748 26250 main.go:141] libmachine: Using API Version 1
I0804 00:39:27.179779 26250 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:27.180092 26250 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:27.180261 26250 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:39:27.183185 26250 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:27.183579 26250 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:27.183617 26250 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:27.183737 26250 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:27.184073 26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:27.184109 26250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:27.199041 26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45547
I0804 00:39:27.199466 26250 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:27.199906 26250 main.go:141] libmachine: Using API Version 1
I0804 00:39:27.199928 26250 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:27.200221 26250 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:27.200439 26250 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:39:27.200638 26250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:27.200657 26250 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:39:27.203464 26250 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:27.204066 26250 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:27.204094 26250 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:27.204287 26250 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:39:27.204452 26250 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:39:27.204655 26250 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:39:27.204794 26250 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:39:27.286121 26250 ssh_runner.go:195] Run: systemctl --version
I0804 00:39:27.292093 26250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:27.307094 26250 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:27.307123 26250 api_server.go:166] Checking apiserver status ...
I0804 00:39:27.307151 26250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:27.321657 26250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
W0804 00:39:27.332371 26250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:27.332425 26250 ssh_runner.go:195] Run: ls
I0804 00:39:27.337061 26250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:27.344543 26250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:27.344574 26250 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
I0804 00:39:27.344589 26250 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:27.344612 26250 status.go:255] checking status of ha-230158-m02 ...
I0804 00:39:27.345038 26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:27.345080 26250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:27.360753 26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37601
I0804 00:39:27.361198 26250 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:27.361624 26250 main.go:141] libmachine: Using API Version 1
I0804 00:39:27.361645 26250 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:27.361919 26250 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:27.362075 26250 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:39:27.364082 26250 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
I0804 00:39:27.364112 26250 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:27.364469 26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:27.364503 26250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:27.379236 26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
I0804 00:39:27.379598 26250 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:27.380047 26250 main.go:141] libmachine: Using API Version 1
I0804 00:39:27.380060 26250 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:27.380349 26250 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:27.380511 26250 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:39:27.383356 26250 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:27.383786 26250 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:27.383813 26250 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:27.383948 26250 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:27.384289 26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:27.384324 26250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:27.401922 26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
I0804 00:39:27.402305 26250 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:27.402814 26250 main.go:141] libmachine: Using API Version 1
I0804 00:39:27.402833 26250 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:27.403108 26250 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:27.403303 26250 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:39:27.403513 26250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:27.403537 26250 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:39:27.406709 26250 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:27.407084 26250 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:27.407113 26250 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:27.407260 26250 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:39:27.407409 26250 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:39:27.407554 26250 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:39:27.407679 26250 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:39:27.493732 26250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:27.508180 26250 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:27.508202 26250 api_server.go:166] Checking apiserver status ...
I0804 00:39:27.508228 26250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0804 00:39:27.520199 26250 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0804 00:39:27.520236 26250 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
I0804 00:39:27.520247 26250 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:27.520266 26250 status.go:255] checking status of ha-230158-m03 ...
I0804 00:39:27.520570 26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:27.520602 26250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:27.535757 26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
I0804 00:39:27.536245 26250 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:27.536719 26250 main.go:141] libmachine: Using API Version 1
I0804 00:39:27.536776 26250 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:27.537096 26250 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:27.537264 26250 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
I0804 00:39:27.538903 26250 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
I0804 00:39:27.538922 26250 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:27.539283 26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:27.539320 26250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:27.555353 26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33237
I0804 00:39:27.555754 26250 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:27.556149 26250 main.go:141] libmachine: Using API Version 1
I0804 00:39:27.556175 26250 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:27.556512 26250 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:27.556720 26250 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:39:27.559212 26250 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:27.559581 26250 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:27.559604 26250 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:27.559759 26250 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:27.560031 26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:27.560067 26250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:27.575198 26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
I0804 00:39:27.575619 26250 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:27.576115 26250 main.go:141] libmachine: Using API Version 1
I0804 00:39:27.576139 26250 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:27.576449 26250 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:27.576677 26250 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:39:27.576888 26250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:27.576910 26250 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:39:27.580158 26250 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:27.580533 26250 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:27.580551 26250 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:27.580734 26250 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:39:27.580899 26250 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:39:27.581052 26250 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:39:27.581182 26250 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
I0804 00:39:27.662758 26250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:27.680424 26250 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:27.680449 26250 api_server.go:166] Checking apiserver status ...
I0804 00:39:27.680486 26250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:27.701083 26250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
W0804 00:39:27.711340 26250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:27.711400 26250 ssh_runner.go:195] Run: ls
I0804 00:39:27.715965 26250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:27.720188 26250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:27.720210 26250 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
I0804 00:39:27.720234 26250 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:27.720256 26250 status.go:255] checking status of ha-230158-m04 ...
I0804 00:39:27.720550 26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:27.720591 26250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:27.735347 26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40527
I0804 00:39:27.735771 26250 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:27.736220 26250 main.go:141] libmachine: Using API Version 1
I0804 00:39:27.736240 26250 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:27.736496 26250 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:27.736656 26250 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
I0804 00:39:27.738223 26250 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
I0804 00:39:27.738248 26250 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:27.738545 26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:27.738581 26250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:27.752752 26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
I0804 00:39:27.753195 26250 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:27.753629 26250 main.go:141] libmachine: Using API Version 1
I0804 00:39:27.753651 26250 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:27.753956 26250 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:27.754148 26250 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
I0804 00:39:27.757074 26250 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:27.757521 26250 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:27.757546 26250 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:27.757690 26250 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:27.758001 26250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:27.758035 26250 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:27.772705 26250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40129
I0804 00:39:27.773044 26250 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:27.773507 26250 main.go:141] libmachine: Using API Version 1
I0804 00:39:27.773529 26250 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:27.773817 26250 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:27.773963 26250 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
I0804 00:39:27.774148 26250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:27.774164 26250 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
I0804 00:39:27.776959 26250 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:27.777383 26250 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:27.777402 26250 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:27.777584 26250 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
I0804 00:39:27.777766 26250 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
I0804 00:39:27.777924 26250 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
I0804 00:39:27.778039 26250 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
I0804 00:39:27.856411 26250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:27.871513 26250 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (775.325921ms)
-- stdout --
ha-230158
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m02
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured
ha-230158-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m04
type: Worker
host: Running
kubelet: Running
-- /stdout --
** stderr **
I0804 00:39:32.683030 26351 out.go:291] Setting OutFile to fd 1 ...
I0804 00:39:32.683245 26351 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:32.683252 26351 out.go:304] Setting ErrFile to fd 2...
I0804 00:39:32.683256 26351 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:32.683410 26351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:39:32.683567 26351 out.go:298] Setting JSON to false
I0804 00:39:32.683590 26351 mustload.go:65] Loading cluster: ha-230158
I0804 00:39:32.683671 26351 notify.go:220] Checking for updates...
I0804 00:39:32.683915 26351 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:39:32.683928 26351 status.go:255] checking status of ha-230158 ...
I0804 00:39:32.684310 26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:32.684366 26351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:32.703035 26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41623
I0804 00:39:32.703385 26351 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:32.704031 26351 main.go:141] libmachine: Using API Version 1
I0804 00:39:32.704069 26351 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:32.704409 26351 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:32.704622 26351 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:39:32.706255 26351 status.go:330] ha-230158 host status = "Running" (err=<nil>)
I0804 00:39:32.706278 26351 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:32.706544 26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:32.706588 26351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:32.722941 26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
I0804 00:39:32.723385 26351 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:32.723829 26351 main.go:141] libmachine: Using API Version 1
I0804 00:39:32.723853 26351 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:32.724146 26351 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:32.724472 26351 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:39:32.727202 26351 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:32.727616 26351 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:32.727655 26351 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:32.727708 26351 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:32.727971 26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:32.728001 26351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:32.742669 26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
I0804 00:39:32.743072 26351 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:32.743525 26351 main.go:141] libmachine: Using API Version 1
I0804 00:39:32.743550 26351 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:32.743836 26351 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:32.744085 26351 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:39:32.744373 26351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:32.744396 26351 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:39:32.747286 26351 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:32.747678 26351 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:32.747700 26351 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:32.747815 26351 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:39:32.747978 26351 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:39:32.748120 26351 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:39:32.748270 26351 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:39:32.829911 26351 ssh_runner.go:195] Run: systemctl --version
I0804 00:39:32.836641 26351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:32.855108 26351 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:32.855136 26351 api_server.go:166] Checking apiserver status ...
I0804 00:39:32.855181 26351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:32.874091 26351 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
W0804 00:39:32.887944 26351 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:32.887997 26351 ssh_runner.go:195] Run: ls
I0804 00:39:32.893211 26351 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:32.898133 26351 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:32.898156 26351 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
I0804 00:39:32.898170 26351 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:32.898200 26351 status.go:255] checking status of ha-230158-m02 ...
I0804 00:39:32.898579 26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:32.898621 26351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:32.914153 26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
I0804 00:39:32.914630 26351 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:32.915126 26351 main.go:141] libmachine: Using API Version 1
I0804 00:39:32.915142 26351 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:32.915449 26351 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:32.915690 26351 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:39:32.917269 26351 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
I0804 00:39:32.917295 26351 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:32.917696 26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:32.917736 26351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:32.933117 26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
I0804 00:39:32.933558 26351 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:32.934049 26351 main.go:141] libmachine: Using API Version 1
I0804 00:39:32.934069 26351 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:32.934408 26351 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:32.934594 26351 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:39:32.937847 26351 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:32.938398 26351 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:32.938425 26351 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:32.938592 26351 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:32.938923 26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:32.938962 26351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:32.953162 26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
I0804 00:39:32.953564 26351 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:32.954044 26351 main.go:141] libmachine: Using API Version 1
I0804 00:39:32.954074 26351 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:32.954380 26351 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:32.954527 26351 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:39:32.954712 26351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:32.954734 26351 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:39:32.957106 26351 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:32.957524 26351 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:32.957563 26351 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:32.957649 26351 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:39:32.957800 26351 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:39:32.957937 26351 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:39:32.958051 26351 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:39:33.041500 26351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:33.056749 26351 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:33.056773 26351 api_server.go:166] Checking apiserver status ...
I0804 00:39:33.056803 26351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0804 00:39:33.069196 26351 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0804 00:39:33.069237 26351 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
I0804 00:39:33.069249 26351 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:33.069268 26351 status.go:255] checking status of ha-230158-m03 ...
I0804 00:39:33.069581 26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:33.069636 26351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:33.085384 26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
I0804 00:39:33.085770 26351 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:33.086359 26351 main.go:141] libmachine: Using API Version 1
I0804 00:39:33.086381 26351 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:33.086699 26351 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:33.086880 26351 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
I0804 00:39:33.088340 26351 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
I0804 00:39:33.088355 26351 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:33.088649 26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:33.088698 26351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:33.103742 26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
I0804 00:39:33.104154 26351 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:33.104587 26351 main.go:141] libmachine: Using API Version 1
I0804 00:39:33.104605 26351 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:33.104940 26351 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:33.105086 26351 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:39:33.108149 26351 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:33.108641 26351 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:33.108668 26351 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:33.108793 26351 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:33.109194 26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:33.109237 26351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:33.124133 26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43113
I0804 00:39:33.124482 26351 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:33.125070 26351 main.go:141] libmachine: Using API Version 1
I0804 00:39:33.125086 26351 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:33.125388 26351 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:33.125586 26351 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:39:33.125779 26351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:33.125805 26351 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:39:33.128457 26351 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:33.128836 26351 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:33.128869 26351 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:33.129029 26351 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:39:33.129184 26351 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:39:33.129354 26351 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:39:33.129490 26351 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
I0804 00:39:33.209193 26351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:33.225012 26351 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:33.225041 26351 api_server.go:166] Checking apiserver status ...
I0804 00:39:33.225079 26351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:33.238913 26351 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
W0804 00:39:33.256898 26351 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:33.256937 26351 ssh_runner.go:195] Run: ls
I0804 00:39:33.260983 26351 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:33.265639 26351 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:33.265658 26351 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
I0804 00:39:33.265665 26351 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:33.265678 26351 status.go:255] checking status of ha-230158-m04 ...
I0804 00:39:33.265941 26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:33.265971 26351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:33.281282 26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35081
I0804 00:39:33.281668 26351 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:33.282120 26351 main.go:141] libmachine: Using API Version 1
I0804 00:39:33.282140 26351 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:33.282451 26351 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:33.282629 26351 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
I0804 00:39:33.284085 26351 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
I0804 00:39:33.284100 26351 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:33.284496 26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:33.284536 26351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:33.298411 26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
I0804 00:39:33.298745 26351 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:33.299170 26351 main.go:141] libmachine: Using API Version 1
I0804 00:39:33.299192 26351 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:33.299527 26351 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:33.299708 26351 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
I0804 00:39:33.302275 26351 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:33.302837 26351 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:33.302860 26351 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:33.302868 26351 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:33.303146 26351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:33.303177 26351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:33.317537 26351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41017
I0804 00:39:33.317891 26351 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:33.318312 26351 main.go:141] libmachine: Using API Version 1
I0804 00:39:33.318331 26351 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:33.318612 26351 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:33.318798 26351 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
I0804 00:39:33.318960 26351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:33.318978 26351 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
I0804 00:39:33.321630 26351 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:33.322071 26351 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:33.322098 26351 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:33.322273 26351 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
I0804 00:39:33.322435 26351 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
I0804 00:39:33.322578 26351 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
I0804 00:39:33.322722 26351 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
I0804 00:39:33.401533 26351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:33.416856 26351 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (760.485639ms)
-- stdout --
ha-230158
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m02
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured
ha-230158-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m04
type: Worker
host: Running
kubelet: Running
-- /stdout --
** stderr **
I0804 00:39:42.965570 26467 out.go:291] Setting OutFile to fd 1 ...
I0804 00:39:42.965793 26467 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:42.965802 26467 out.go:304] Setting ErrFile to fd 2...
I0804 00:39:42.965806 26467 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:39:42.965983 26467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:39:42.966138 26467 out.go:298] Setting JSON to false
I0804 00:39:42.966161 26467 mustload.go:65] Loading cluster: ha-230158
I0804 00:39:42.966265 26467 notify.go:220] Checking for updates...
I0804 00:39:42.966548 26467 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:39:42.966564 26467 status.go:255] checking status of ha-230158 ...
I0804 00:39:42.966934 26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:42.966987 26467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:42.985623 26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44931
I0804 00:39:42.985989 26467 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:42.986537 26467 main.go:141] libmachine: Using API Version 1
I0804 00:39:42.986587 26467 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:42.987005 26467 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:42.987215 26467 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:39:42.989087 26467 status.go:330] ha-230158 host status = "Running" (err=<nil>)
I0804 00:39:42.989105 26467 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:42.989545 26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:42.989591 26467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:43.008685 26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
I0804 00:39:43.009130 26467 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:43.009635 26467 main.go:141] libmachine: Using API Version 1
I0804 00:39:43.009659 26467 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:43.010019 26467 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:43.010192 26467 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:39:43.013316 26467 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:43.013803 26467 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:43.013830 26467 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:43.014016 26467 host.go:66] Checking if "ha-230158" exists ...
I0804 00:39:43.014452 26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:43.014500 26467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:43.030130 26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
I0804 00:39:43.030562 26467 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:43.030955 26467 main.go:141] libmachine: Using API Version 1
I0804 00:39:43.030976 26467 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:43.031311 26467 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:43.031495 26467 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:39:43.031665 26467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:43.031690 26467 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:39:43.034592 26467 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:43.035081 26467 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:39:43.035116 26467 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:39:43.035257 26467 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:39:43.035429 26467 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:39:43.035574 26467 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:39:43.035730 26467 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:39:43.118964 26467 ssh_runner.go:195] Run: systemctl --version
I0804 00:39:43.125566 26467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:43.140717 26467 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:43.140753 26467 api_server.go:166] Checking apiserver status ...
I0804 00:39:43.140789 26467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:43.155035 26467 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
W0804 00:39:43.165877 26467 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:43.165912 26467 ssh_runner.go:195] Run: ls
I0804 00:39:43.169933 26467 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:43.173992 26467 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:43.174009 26467 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
I0804 00:39:43.174018 26467 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:43.174030 26467 status.go:255] checking status of ha-230158-m02 ...
I0804 00:39:43.174337 26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:43.174376 26467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:43.190386 26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
I0804 00:39:43.190879 26467 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:43.191469 26467 main.go:141] libmachine: Using API Version 1
I0804 00:39:43.191494 26467 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:43.191814 26467 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:43.192035 26467 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:39:43.193622 26467 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
I0804 00:39:43.193638 26467 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:43.193950 26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:43.193993 26467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:43.208125 26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
I0804 00:39:43.208570 26467 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:43.209091 26467 main.go:141] libmachine: Using API Version 1
I0804 00:39:43.209110 26467 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:43.209436 26467 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:43.209612 26467 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:39:43.212323 26467 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:43.212761 26467 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:43.212785 26467 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:43.212956 26467 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:39:43.213291 26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:43.213324 26467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:43.227103 26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
I0804 00:39:43.227420 26467 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:43.227829 26467 main.go:141] libmachine: Using API Version 1
I0804 00:39:43.227851 26467 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:43.228138 26467 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:43.228300 26467 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:39:43.228462 26467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:43.228482 26467 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:39:43.231186 26467 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:43.231601 26467 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:39:43.231626 26467 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:39:43.231760 26467 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:39:43.231905 26467 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:39:43.232035 26467 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:39:43.232166 26467 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:39:43.317446 26467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:43.334560 26467 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:43.334585 26467 api_server.go:166] Checking apiserver status ...
I0804 00:39:43.334622 26467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0804 00:39:43.348030 26467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0804 00:39:43.348052 26467 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
I0804 00:39:43.348062 26467 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:43.348078 26467 status.go:255] checking status of ha-230158-m03 ...
I0804 00:39:43.348414 26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:43.348453 26467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:43.362963 26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
I0804 00:39:43.363327 26467 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:43.363817 26467 main.go:141] libmachine: Using API Version 1
I0804 00:39:43.363841 26467 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:43.364175 26467 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:43.364406 26467 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
I0804 00:39:43.365954 26467 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
I0804 00:39:43.365967 26467 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:43.366368 26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:43.366408 26467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:43.380961 26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33623
I0804 00:39:43.381363 26467 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:43.381789 26467 main.go:141] libmachine: Using API Version 1
I0804 00:39:43.381816 26467 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:43.382118 26467 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:43.382321 26467 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:39:43.384941 26467 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:43.385396 26467 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:43.385423 26467 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:43.385561 26467 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:39:43.385954 26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:43.385992 26467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:43.401063 26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33521
I0804 00:39:43.401415 26467 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:43.401786 26467 main.go:141] libmachine: Using API Version 1
I0804 00:39:43.401803 26467 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:43.402155 26467 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:43.402378 26467 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:39:43.402576 26467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:43.402598 26467 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:39:43.405416 26467 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:43.405770 26467 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:39:43.405810 26467 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:39:43.405885 26467 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:39:43.406065 26467 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:39:43.406207 26467 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:39:43.406353 26467 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
I0804 00:39:43.486420 26467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:43.502427 26467 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:39:43.502453 26467 api_server.go:166] Checking apiserver status ...
I0804 00:39:43.502494 26467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:39:43.515706 26467 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
W0804 00:39:43.524825 26467 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:39:43.524861 26467 ssh_runner.go:195] Run: ls
I0804 00:39:43.529372 26467 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:39:43.533602 26467 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:39:43.533623 26467 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
I0804 00:39:43.533633 26467 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:39:43.533654 26467 status.go:255] checking status of ha-230158-m04 ...
I0804 00:39:43.533942 26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:43.533978 26467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:43.549293 26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
I0804 00:39:43.549660 26467 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:43.550071 26467 main.go:141] libmachine: Using API Version 1
I0804 00:39:43.550086 26467 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:43.550453 26467 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:43.550671 26467 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
I0804 00:39:43.552228 26467 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
I0804 00:39:43.552243 26467 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:43.552540 26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:43.552575 26467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:43.566723 26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
I0804 00:39:43.567216 26467 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:43.567685 26467 main.go:141] libmachine: Using API Version 1
I0804 00:39:43.567700 26467 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:43.567995 26467 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:43.568184 26467 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
I0804 00:39:43.571273 26467 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:43.571698 26467 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:43.571725 26467 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:43.571872 26467 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:39:43.572190 26467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:39:43.572226 26467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:39:43.586366 26467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
I0804 00:39:43.586817 26467 main.go:141] libmachine: () Calling .GetVersion
I0804 00:39:43.587337 26467 main.go:141] libmachine: Using API Version 1
I0804 00:39:43.587360 26467 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:39:43.587625 26467 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:39:43.587861 26467 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
I0804 00:39:43.588063 26467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:39:43.588083 26467 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
I0804 00:39:43.591237 26467 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:43.591732 26467 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:39:43.591760 26467 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:39:43.591961 26467 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
I0804 00:39:43.592150 26467 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
I0804 00:39:43.592422 26467 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
I0804 00:39:43.592582 26467 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
I0804 00:39:43.669969 26467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:39:43.684037 26467 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:428: (dbg) Run: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (774.497342ms)
-- stdout --
ha-230158
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m02
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured
ha-230158-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m04
type: Worker
host: Running
kubelet: Running
-- /stdout --
** stderr **
I0804 00:40:00.705983 26628 out.go:291] Setting OutFile to fd 1 ...
I0804 00:40:00.706100 26628 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:40:00.706107 26628 out.go:304] Setting ErrFile to fd 2...
I0804 00:40:00.706111 26628 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:40:00.706305 26628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:40:00.706532 26628 out.go:298] Setting JSON to false
I0804 00:40:00.706555 26628 mustload.go:65] Loading cluster: ha-230158
I0804 00:40:00.706595 26628 notify.go:220] Checking for updates...
I0804 00:40:00.706903 26628 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:40:00.706915 26628 status.go:255] checking status of ha-230158 ...
I0804 00:40:00.707342 26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:00.707405 26628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:00.726813 26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44235
I0804 00:40:00.727241 26628 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:00.727920 26628 main.go:141] libmachine: Using API Version 1
I0804 00:40:00.727946 26628 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:00.728300 26628 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:00.728534 26628 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:40:00.730088 26628 status.go:330] ha-230158 host status = "Running" (err=<nil>)
I0804 00:40:00.730108 26628 host.go:66] Checking if "ha-230158" exists ...
I0804 00:40:00.730435 26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:00.730476 26628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:00.744540 26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
I0804 00:40:00.744965 26628 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:00.745419 26628 main.go:141] libmachine: Using API Version 1
I0804 00:40:00.745458 26628 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:00.745752 26628 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:00.745938 26628 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:40:00.748744 26628 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:40:00.749207 26628 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:40:00.749226 26628 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:40:00.749364 26628 host.go:66] Checking if "ha-230158" exists ...
I0804 00:40:00.749735 26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:00.749773 26628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:00.764020 26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
I0804 00:40:00.764553 26628 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:00.765033 26628 main.go:141] libmachine: Using API Version 1
I0804 00:40:00.765055 26628 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:00.765400 26628 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:00.765608 26628 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:40:00.765816 26628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:40:00.765878 26628 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:40:00.768271 26628 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:40:00.768696 26628 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:40:00.768722 26628 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:40:00.768877 26628 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:40:00.769034 26628 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:40:00.769207 26628 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:40:00.769342 26628 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:40:00.859720 26628 ssh_runner.go:195] Run: systemctl --version
I0804 00:40:00.866803 26628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:40:00.885397 26628 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:40:00.885424 26628 api_server.go:166] Checking apiserver status ...
I0804 00:40:00.885464 26628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:40:00.899989 26628 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
W0804 00:40:00.909809 26628 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:40:00.909862 26628 ssh_runner.go:195] Run: ls
I0804 00:40:00.914777 26628 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:40:00.920715 26628 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:40:00.920737 26628 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
I0804 00:40:00.920749 26628 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:40:00.920767 26628 status.go:255] checking status of ha-230158-m02 ...
I0804 00:40:00.921047 26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:00.921089 26628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:00.938278 26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
I0804 00:40:00.938671 26628 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:00.939091 26628 main.go:141] libmachine: Using API Version 1
I0804 00:40:00.939111 26628 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:00.939404 26628 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:00.939599 26628 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:40:00.941542 26628 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
I0804 00:40:00.941555 26628 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:40:00.941822 26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:00.941855 26628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:00.956049 26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
I0804 00:40:00.956359 26628 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:00.956828 26628 main.go:141] libmachine: Using API Version 1
I0804 00:40:00.956849 26628 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:00.957211 26628 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:00.957395 26628 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:40:00.959848 26628 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:40:00.960225 26628 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:40:00.960261 26628 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:40:00.960364 26628 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:40:00.960670 26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:00.960700 26628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:00.975436 26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
I0804 00:40:00.975758 26628 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:00.976115 26628 main.go:141] libmachine: Using API Version 1
I0804 00:40:00.976132 26628 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:00.976465 26628 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:00.976659 26628 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:40:00.976838 26628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:40:00.976857 26628 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:40:00.979160 26628 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:40:00.979636 26628 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:40:00.979660 26628 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:40:00.979784 26628 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:40:00.979938 26628 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:40:00.980132 26628 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:40:00.980288 26628 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:40:01.065427 26628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:40:01.081150 26628 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:40:01.081171 26628 api_server.go:166] Checking apiserver status ...
I0804 00:40:01.081200 26628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0804 00:40:01.093834 26628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0804 00:40:01.093852 26628 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
I0804 00:40:01.093860 26628 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:40:01.093875 26628 status.go:255] checking status of ha-230158-m03 ...
I0804 00:40:01.094218 26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:01.094279 26628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:01.109668 26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
I0804 00:40:01.110093 26628 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:01.110588 26628 main.go:141] libmachine: Using API Version 1
I0804 00:40:01.110610 26628 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:01.110975 26628 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:01.111157 26628 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
I0804 00:40:01.112853 26628 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
I0804 00:40:01.112872 26628 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:40:01.113236 26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:01.113280 26628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:01.128092 26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
I0804 00:40:01.128444 26628 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:01.128881 26628 main.go:141] libmachine: Using API Version 1
I0804 00:40:01.128905 26628 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:01.129186 26628 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:01.129389 26628 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:40:01.132514 26628 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:40:01.133096 26628 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:40:01.133136 26628 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:40:01.133456 26628 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:40:01.133769 26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:01.133809 26628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:01.149480 26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
I0804 00:40:01.149905 26628 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:01.150447 26628 main.go:141] libmachine: Using API Version 1
I0804 00:40:01.150472 26628 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:01.150749 26628 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:01.150970 26628 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:40:01.151153 26628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:40:01.151174 26628 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:40:01.154041 26628 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:40:01.154547 26628 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:40:01.154583 26628 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:40:01.154709 26628 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:40:01.154897 26628 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:40:01.155063 26628 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:40:01.155211 26628 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
I0804 00:40:01.234354 26628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:40:01.252060 26628 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:40:01.252092 26628 api_server.go:166] Checking apiserver status ...
I0804 00:40:01.252132 26628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:40:01.267481 26628 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
W0804 00:40:01.276924 26628 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:40:01.276966 26628 ssh_runner.go:195] Run: ls
I0804 00:40:01.281584 26628 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:40:01.285812 26628 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:40:01.285836 26628 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
I0804 00:40:01.285847 26628 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:40:01.285865 26628 status.go:255] checking status of ha-230158-m04 ...
I0804 00:40:01.286148 26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:01.286182 26628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:01.301131 26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
I0804 00:40:01.301565 26628 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:01.302003 26628 main.go:141] libmachine: Using API Version 1
I0804 00:40:01.302022 26628 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:01.302342 26628 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:01.302535 26628 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
I0804 00:40:01.303874 26628 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
I0804 00:40:01.303895 26628 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:40:01.304211 26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:01.304246 26628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:01.318453 26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38739
I0804 00:40:01.318796 26628 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:01.319251 26628 main.go:141] libmachine: Using API Version 1
I0804 00:40:01.319270 26628 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:01.319562 26628 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:01.319764 26628 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
I0804 00:40:01.322395 26628 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:40:01.322776 26628 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:40:01.322811 26628 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:40:01.322944 26628 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:40:01.323336 26628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:01.323402 26628 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:01.337881 26628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33047
I0804 00:40:01.338340 26628 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:01.338784 26628 main.go:141] libmachine: Using API Version 1
I0804 00:40:01.338806 26628 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:01.339157 26628 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:01.339363 26628 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
I0804 00:40:01.339554 26628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:40:01.339583 26628 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
I0804 00:40:01.342103 26628 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:40:01.342523 26628 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:40:01.342560 26628 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:40:01.342715 26628 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
I0804 00:40:01.342891 26628 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
I0804 00:40:01.343046 26628 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
I0804 00:40:01.343227 26628 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
I0804 00:40:01.421977 26628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:40:01.436806 26628 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
E0804 00:40:06.990215 11136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/addons-044946/client.crt: no such file or directory
ha_test.go:428: (dbg) Run: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr: exit status 2 (762.939021ms)
-- stdout --
ha-230158
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m02
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured
ha-230158-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ha-230158-m04
type: Worker
host: Running
kubelet: Running
-- /stdout --
** stderr **
I0804 00:40:13.625178 26761 out.go:291] Setting OutFile to fd 1 ...
I0804 00:40:13.625402 26761 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:40:13.625410 26761 out.go:304] Setting ErrFile to fd 2...
I0804 00:40:13.625414 26761 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:40:13.625563 26761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:40:13.625702 26761 out.go:298] Setting JSON to false
I0804 00:40:13.625723 26761 mustload.go:65] Loading cluster: ha-230158
I0804 00:40:13.625757 26761 notify.go:220] Checking for updates...
I0804 00:40:13.626138 26761 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:40:13.626156 26761 status.go:255] checking status of ha-230158 ...
I0804 00:40:13.626581 26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:13.626639 26761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:13.647711 26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34223
I0804 00:40:13.648129 26761 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:13.648713 26761 main.go:141] libmachine: Using API Version 1
I0804 00:40:13.648734 26761 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:13.649163 26761 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:13.649485 26761 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:40:13.651207 26761 status.go:330] ha-230158 host status = "Running" (err=<nil>)
I0804 00:40:13.651221 26761 host.go:66] Checking if "ha-230158" exists ...
I0804 00:40:13.651538 26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:13.651581 26761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:13.665791 26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
I0804 00:40:13.666179 26761 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:13.666689 26761 main.go:141] libmachine: Using API Version 1
I0804 00:40:13.666711 26761 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:13.666996 26761 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:13.667185 26761 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:40:13.670065 26761 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:40:13.670539 26761 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:40:13.670563 26761 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:40:13.670679 26761 host.go:66] Checking if "ha-230158" exists ...
I0804 00:40:13.670930 26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:13.670971 26761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:13.685230 26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43215
I0804 00:40:13.685550 26761 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:13.685980 26761 main.go:141] libmachine: Using API Version 1
I0804 00:40:13.685998 26761 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:13.686311 26761 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:13.686504 26761 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:40:13.686677 26761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:40:13.686695 26761 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:40:13.689125 26761 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:40:13.689494 26761 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:40:13.689513 26761 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:40:13.689645 26761 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:40:13.689835 26761 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:40:13.690037 26761 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:40:13.690196 26761 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:40:13.773913 26761 ssh_runner.go:195] Run: systemctl --version
I0804 00:40:13.781062 26761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:40:13.797485 26761 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:40:13.797513 26761 api_server.go:166] Checking apiserver status ...
I0804 00:40:13.797545 26761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:40:13.812929 26761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup
W0804 00:40:13.823126 26761 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2004/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:40:13.823184 26761 ssh_runner.go:195] Run: ls
I0804 00:40:13.827659 26761 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:40:13.833525 26761 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:40:13.833544 26761 status.go:422] ha-230158 apiserver status = Running (err=<nil>)
I0804 00:40:13.833552 26761 status.go:257] ha-230158 status: &{Name:ha-230158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:40:13.833567 26761 status.go:255] checking status of ha-230158-m02 ...
I0804 00:40:13.833867 26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:13.833905 26761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:13.848575 26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
I0804 00:40:13.848917 26761 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:13.849357 26761 main.go:141] libmachine: Using API Version 1
I0804 00:40:13.849378 26761 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:13.849675 26761 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:13.849861 26761 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:40:13.851329 26761 status.go:330] ha-230158-m02 host status = "Running" (err=<nil>)
I0804 00:40:13.851346 26761 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:40:13.851663 26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:13.851697 26761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:13.866462 26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
I0804 00:40:13.866850 26761 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:13.867315 26761 main.go:141] libmachine: Using API Version 1
I0804 00:40:13.867340 26761 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:13.867600 26761 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:13.867782 26761 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:40:13.870119 26761 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:40:13.870552 26761 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:40:13.870579 26761 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:40:13.870801 26761 host.go:66] Checking if "ha-230158-m02" exists ...
I0804 00:40:13.871116 26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:13.871149 26761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:13.885387 26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33409
I0804 00:40:13.885707 26761 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:13.886151 26761 main.go:141] libmachine: Using API Version 1
I0804 00:40:13.886171 26761 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:13.886538 26761 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:13.886819 26761 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:40:13.887052 26761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:40:13.887073 26761 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:40:13.889724 26761 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:40:13.890148 26761 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:40:13.890170 26761 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:40:13.890343 26761 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:40:13.890541 26761 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:40:13.890797 26761 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:40:13.890985 26761 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:40:13.973179 26761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:40:13.988079 26761 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:40:13.988102 26761 api_server.go:166] Checking apiserver status ...
I0804 00:40:13.988132 26761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0804 00:40:14.000664 26761 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0804 00:40:14.000688 26761 status.go:422] ha-230158-m02 apiserver status = Stopped (err=<nil>)
I0804 00:40:14.000696 26761 status.go:257] ha-230158-m02 status: &{Name:ha-230158-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:40:14.000709 26761 status.go:255] checking status of ha-230158-m03 ...
I0804 00:40:14.001144 26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:14.001186 26761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:14.017077 26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41765
I0804 00:40:14.017531 26761 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:14.017967 26761 main.go:141] libmachine: Using API Version 1
I0804 00:40:14.017986 26761 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:14.018313 26761 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:14.018479 26761 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
I0804 00:40:14.020146 26761 status.go:330] ha-230158-m03 host status = "Running" (err=<nil>)
I0804 00:40:14.020161 26761 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:40:14.020444 26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:14.020495 26761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:14.035027 26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35089
I0804 00:40:14.035357 26761 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:14.035793 26761 main.go:141] libmachine: Using API Version 1
I0804 00:40:14.035813 26761 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:14.036130 26761 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:14.036283 26761 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:40:14.039002 26761 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:40:14.039526 26761 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:40:14.039571 26761 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:40:14.039694 26761 host.go:66] Checking if "ha-230158-m03" exists ...
I0804 00:40:14.040037 26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:14.040069 26761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:14.055218 26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
I0804 00:40:14.055638 26761 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:14.056054 26761 main.go:141] libmachine: Using API Version 1
I0804 00:40:14.056072 26761 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:14.056374 26761 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:14.056527 26761 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:40:14.056712 26761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:40:14.056729 26761 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:40:14.059034 26761 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:40:14.059422 26761 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:40:14.059458 26761 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:40:14.059601 26761 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:40:14.059749 26761 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:40:14.059918 26761 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:40:14.060057 26761 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
I0804 00:40:14.138356 26761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:40:14.156650 26761 kubeconfig.go:125] found "ha-230158" server: "https://192.168.39.254:8443"
I0804 00:40:14.156675 26761 api_server.go:166] Checking apiserver status ...
I0804 00:40:14.156703 26761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:40:14.173050 26761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
W0804 00:40:14.182584 26761 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
stdout:
stderr:
I0804 00:40:14.182640 26761 ssh_runner.go:195] Run: ls
I0804 00:40:14.187124 26761 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0804 00:40:14.196154 26761 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0804 00:40:14.196188 26761 status.go:422] ha-230158-m03 apiserver status = Running (err=<nil>)
I0804 00:40:14.196200 26761 status.go:257] ha-230158-m03 status: &{Name:ha-230158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0804 00:40:14.196226 26761 status.go:255] checking status of ha-230158-m04 ...
I0804 00:40:14.196556 26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:14.196593 26761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:14.211442 26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37811
I0804 00:40:14.211868 26761 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:14.212341 26761 main.go:141] libmachine: Using API Version 1
I0804 00:40:14.212369 26761 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:14.212697 26761 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:14.212874 26761 main.go:141] libmachine: (ha-230158-m04) Calling .GetState
I0804 00:40:14.214455 26761 status.go:330] ha-230158-m04 host status = "Running" (err=<nil>)
I0804 00:40:14.214486 26761 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:40:14.214796 26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:14.214835 26761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:14.229872 26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46881
I0804 00:40:14.230263 26761 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:14.230721 26761 main.go:141] libmachine: Using API Version 1
I0804 00:40:14.230740 26761 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:14.231029 26761 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:14.231192 26761 main.go:141] libmachine: (ha-230158-m04) Calling .GetIP
I0804 00:40:14.233848 26761 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:40:14.234415 26761 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:40:14.234465 26761 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:40:14.234622 26761 host.go:66] Checking if "ha-230158-m04" exists ...
I0804 00:40:14.234957 26761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:40:14.234990 26761 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:40:14.251015 26761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44805
I0804 00:40:14.251440 26761 main.go:141] libmachine: () Calling .GetVersion
I0804 00:40:14.251845 26761 main.go:141] libmachine: Using API Version 1
I0804 00:40:14.251863 26761 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:40:14.252161 26761 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:40:14.252336 26761 main.go:141] libmachine: (ha-230158-m04) Calling .DriverName
I0804 00:40:14.252511 26761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0804 00:40:14.252528 26761 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHHostname
I0804 00:40:14.254949 26761 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:40:14.255269 26761 main.go:141] libmachine: (ha-230158-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:b8:2e", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:42 +0000 UTC Type:0 Mac:52:54:00:eb:b8:2e Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-230158-m04 Clientid:01:52:54:00:eb:b8:2e}
I0804 00:40:14.255286 26761 main.go:141] libmachine: (ha-230158-m04) DBG | domain ha-230158-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:eb:b8:2e in network mk-ha-230158
I0804 00:40:14.255427 26761 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHPort
I0804 00:40:14.255582 26761 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHKeyPath
I0804 00:40:14.255727 26761 main.go:141] libmachine: (ha-230158-m04) Calling .GetSSHUsername
I0804 00:40:14.255855 26761 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m04/id_rsa Username:docker}
I0804 00:40:14.333822 26761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:40:14.348585 26761 status.go:257] ha-230158-m04 status: &{Name:ha-230158-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-230158 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p ha-230158 -n ha-230158
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p ha-230158 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-230158 logs -n 25: (1.133054616s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
| ssh | ha-230158 ssh -n | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | ha-230158 cp ha-230158-m03:/home/docker/cp-test.txt | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158:/home/docker/cp-test_ha-230158-m03_ha-230158.txt | | | | | |
| ssh | ha-230158 ssh -n | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-230158 ssh -n ha-230158 sudo cat | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | /home/docker/cp-test_ha-230158-m03_ha-230158.txt | | | | | |
| cp | ha-230158 cp ha-230158-m03:/home/docker/cp-test.txt | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m02:/home/docker/cp-test_ha-230158-m03_ha-230158-m02.txt | | | | | |
| ssh | ha-230158 ssh -n | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-230158 ssh -n ha-230158-m02 sudo cat | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | /home/docker/cp-test_ha-230158-m03_ha-230158-m02.txt | | | | | |
| cp | ha-230158 cp ha-230158-m03:/home/docker/cp-test.txt | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m04:/home/docker/cp-test_ha-230158-m03_ha-230158-m04.txt | | | | | |
| ssh | ha-230158 ssh -n | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-230158 ssh -n ha-230158-m04 sudo cat | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | /home/docker/cp-test_ha-230158-m03_ha-230158-m04.txt | | | | | |
| cp | ha-230158 cp testdata/cp-test.txt | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m04:/home/docker/cp-test.txt | | | | | |
| ssh | ha-230158 ssh -n | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | ha-230158 cp ha-230158-m04:/home/docker/cp-test.txt | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | /tmp/TestMultiControlPlaneserialCopyFile571222237/001/cp-test_ha-230158-m04.txt | | | | | |
| ssh | ha-230158 ssh -n | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | ha-230158 cp ha-230158-m04:/home/docker/cp-test.txt | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158:/home/docker/cp-test_ha-230158-m04_ha-230158.txt | | | | | |
| ssh | ha-230158 ssh -n | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-230158 ssh -n ha-230158 sudo cat | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | /home/docker/cp-test_ha-230158-m04_ha-230158.txt | | | | | |
| cp | ha-230158 cp ha-230158-m04:/home/docker/cp-test.txt | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m02:/home/docker/cp-test_ha-230158-m04_ha-230158-m02.txt | | | | | |
| ssh | ha-230158 ssh -n | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-230158 ssh -n ha-230158-m02 sudo cat | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | /home/docker/cp-test_ha-230158-m04_ha-230158-m02.txt | | | | | |
| cp | ha-230158 cp ha-230158-m04:/home/docker/cp-test.txt | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m03:/home/docker/cp-test_ha-230158-m04_ha-230158-m03.txt | | | | | |
| ssh | ha-230158 ssh -n | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | ha-230158-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-230158 ssh -n ha-230158-m03 sudo cat | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | /home/docker/cp-test_ha-230158-m04_ha-230158-m03.txt | | | | | |
| node | ha-230158 node stop m02 -v=7 | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
| | --alsologtostderr | | | | | |
| node | ha-230158 node start m02 -v=7 | ha-230158 | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | |
| | --alsologtostderr | | | | | |
|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/08/04 00:32:30
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.22.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0804 00:32:30.855673 21140 out.go:291] Setting OutFile to fd 1 ...
I0804 00:32:30.855914 21140 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:32:30.855922 21140 out.go:304] Setting ErrFile to fd 2...
I0804 00:32:30.855926 21140 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 00:32:30.856094 21140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-3947/.minikube/bin
I0804 00:32:30.856624 21140 out.go:298] Setting JSON to false
I0804 00:32:30.857452 21140 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":901,"bootTime":1722730650,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0804 00:32:30.857503 21140 start.go:139] virtualization: kvm guest
I0804 00:32:30.859407 21140 out.go:177] * [ha-230158] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0804 00:32:30.860777 21140 notify.go:220] Checking for updates...
I0804 00:32:30.860790 21140 out.go:177] - MINIKUBE_LOCATION=19364
I0804 00:32:30.862263 21140 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0804 00:32:30.863516 21140 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19364-3947/kubeconfig
I0804 00:32:30.864678 21140 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-3947/.minikube
I0804 00:32:30.865850 21140 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0804 00:32:30.867244 21140 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0804 00:32:30.868638 21140 driver.go:392] Setting default libvirt URI to qemu:///system
I0804 00:32:30.902700 21140 out.go:177] * Using the kvm2 driver based on user configuration
I0804 00:32:30.903896 21140 start.go:297] selected driver: kvm2
I0804 00:32:30.903910 21140 start.go:901] validating driver "kvm2" against <nil>
I0804 00:32:30.903929 21140 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0804 00:32:30.904664 21140 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0804 00:32:30.904725 21140 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-3947/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0804 00:32:30.920763 21140 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
I0804 00:32:30.920824 21140 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0804 00:32:30.921056 21140 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0804 00:32:30.921140 21140 cni.go:84] Creating CNI manager for ""
I0804 00:32:30.921155 21140 cni.go:136] multinode detected (0 nodes found), recommending kindnet
I0804 00:32:30.921162 21140 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0804 00:32:30.921247 21140 start.go:340] cluster config:
{Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0804 00:32:30.921381 21140 iso.go:125] acquiring lock: {Name:mk61d89caa127145c801001852615ed27862a97f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0804 00:32:30.923111 21140 out.go:177] * Starting "ha-230158" primary control-plane node in "ha-230158" cluster
I0804 00:32:30.924560 21140 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0804 00:32:30.924602 21140 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
I0804 00:32:30.924614 21140 cache.go:56] Caching tarball of preloaded images
I0804 00:32:30.925772 21140 preload.go:172] Found /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0804 00:32:30.925794 21140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0804 00:32:30.926310 21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:32:30.926344 21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json: {Name:mk27b5858edb4d8a82fada41a2f7df8a81efcd09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:32:30.926532 21140 start.go:360] acquireMachinesLock for ha-230158: {Name:mk3c8b650475b5a29be5f1e49e0345d4de7c1632 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0804 00:32:30.926583 21140 start.go:364] duration metric: took 25.422µs to acquireMachinesLock for "ha-230158"
I0804 00:32:30.926607 21140 start.go:93] Provisioning new machine with config: &{Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0804 00:32:30.926689 21140 start.go:125] createHost starting for "" (driver="kvm2")
I0804 00:32:30.928257 21140 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0804 00:32:30.928406 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:32:30.928460 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:32:30.942414 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41165
I0804 00:32:30.942878 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:32:30.943469 21140 main.go:141] libmachine: Using API Version 1
I0804 00:32:30.943490 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:32:30.943821 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:32:30.943988 21140 main.go:141] libmachine: (ha-230158) Calling .GetMachineName
I0804 00:32:30.944139 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:32:30.944285 21140 start.go:159] libmachine.API.Create for "ha-230158" (driver="kvm2")
I0804 00:32:30.944309 21140 client.go:168] LocalClient.Create starting
I0804 00:32:30.944336 21140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem
I0804 00:32:30.944365 21140 main.go:141] libmachine: Decoding PEM data...
I0804 00:32:30.944378 21140 main.go:141] libmachine: Parsing certificate...
I0804 00:32:30.944432 21140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem
I0804 00:32:30.944453 21140 main.go:141] libmachine: Decoding PEM data...
I0804 00:32:30.944465 21140 main.go:141] libmachine: Parsing certificate...
I0804 00:32:30.944480 21140 main.go:141] libmachine: Running pre-create checks...
I0804 00:32:30.944489 21140 main.go:141] libmachine: (ha-230158) Calling .PreCreateCheck
I0804 00:32:30.944788 21140 main.go:141] libmachine: (ha-230158) Calling .GetConfigRaw
I0804 00:32:30.945189 21140 main.go:141] libmachine: Creating machine...
I0804 00:32:30.945217 21140 main.go:141] libmachine: (ha-230158) Calling .Create
I0804 00:32:30.945352 21140 main.go:141] libmachine: (ha-230158) Creating KVM machine...
I0804 00:32:30.946565 21140 main.go:141] libmachine: (ha-230158) DBG | found existing default KVM network
I0804 00:32:30.947248 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:30.947097 21164 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
I0804 00:32:30.947281 21140 main.go:141] libmachine: (ha-230158) DBG | created network xml:
I0804 00:32:30.947299 21140 main.go:141] libmachine: (ha-230158) DBG | <network>
I0804 00:32:30.947308 21140 main.go:141] libmachine: (ha-230158) DBG | <name>mk-ha-230158</name>
I0804 00:32:30.947316 21140 main.go:141] libmachine: (ha-230158) DBG | <dns enable='no'/>
I0804 00:32:30.947323 21140 main.go:141] libmachine: (ha-230158) DBG |
I0804 00:32:30.947335 21140 main.go:141] libmachine: (ha-230158) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0804 00:32:30.947344 21140 main.go:141] libmachine: (ha-230158) DBG | <dhcp>
I0804 00:32:30.947354 21140 main.go:141] libmachine: (ha-230158) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0804 00:32:30.947367 21140 main.go:141] libmachine: (ha-230158) DBG | </dhcp>
I0804 00:32:30.947404 21140 main.go:141] libmachine: (ha-230158) DBG | </ip>
I0804 00:32:30.947418 21140 main.go:141] libmachine: (ha-230158) DBG |
I0804 00:32:30.947424 21140 main.go:141] libmachine: (ha-230158) DBG | </network>
I0804 00:32:30.947429 21140 main.go:141] libmachine: (ha-230158) DBG |
I0804 00:32:30.952537 21140 main.go:141] libmachine: (ha-230158) DBG | trying to create private KVM network mk-ha-230158 192.168.39.0/24...
I0804 00:32:31.015570 21140 main.go:141] libmachine: (ha-230158) DBG | private KVM network mk-ha-230158 192.168.39.0/24 created
I0804 00:32:31.015600 21140 main.go:141] libmachine: (ha-230158) Setting up store path in /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158 ...
I0804 00:32:31.015614 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:31.015548 21164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-3947/.minikube
I0804 00:32:31.015633 21140 main.go:141] libmachine: (ha-230158) Building disk image from file:///home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
I0804 00:32:31.015705 21140 main.go:141] libmachine: (ha-230158) Downloading /home/jenkins/minikube-integration/19364-3947/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
I0804 00:32:31.252936 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:31.252797 21164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa...
I0804 00:32:31.559361 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:31.559217 21164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/ha-230158.rawdisk...
I0804 00:32:31.559386 21140 main.go:141] libmachine: (ha-230158) DBG | Writing magic tar header
I0804 00:32:31.559396 21140 main.go:141] libmachine: (ha-230158) DBG | Writing SSH key tar header
I0804 00:32:31.559404 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:31.559340 21164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158 ...
I0804 00:32:31.559525 21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158
I0804 00:32:31.559557 21140 main.go:141] libmachine: (ha-230158) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158 (perms=drwx------)
I0804 00:32:31.559573 21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube/machines
I0804 00:32:31.559638 21140 main.go:141] libmachine: (ha-230158) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube/machines (perms=drwxr-xr-x)
I0804 00:32:31.559673 21140 main.go:141] libmachine: (ha-230158) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube (perms=drwxr-xr-x)
I0804 00:32:31.559685 21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube
I0804 00:32:31.559705 21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947
I0804 00:32:31.559718 21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0804 00:32:31.559732 21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home/jenkins
I0804 00:32:31.559747 21140 main.go:141] libmachine: (ha-230158) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947 (perms=drwxrwxr-x)
I0804 00:32:31.559760 21140 main.go:141] libmachine: (ha-230158) DBG | Checking permissions on dir: /home
I0804 00:32:31.559783 21140 main.go:141] libmachine: (ha-230158) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0804 00:32:31.559796 21140 main.go:141] libmachine: (ha-230158) DBG | Skipping /home - not owner
I0804 00:32:31.559814 21140 main.go:141] libmachine: (ha-230158) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0804 00:32:31.559825 21140 main.go:141] libmachine: (ha-230158) Creating domain...
I0804 00:32:31.561006 21140 main.go:141] libmachine: (ha-230158) define libvirt domain using xml:
I0804 00:32:31.561039 21140 main.go:141] libmachine: (ha-230158) <domain type='kvm'>
I0804 00:32:31.561049 21140 main.go:141] libmachine: (ha-230158) <name>ha-230158</name>
I0804 00:32:31.561057 21140 main.go:141] libmachine: (ha-230158) <memory unit='MiB'>2200</memory>
I0804 00:32:31.561067 21140 main.go:141] libmachine: (ha-230158) <vcpu>2</vcpu>
I0804 00:32:31.561072 21140 main.go:141] libmachine: (ha-230158) <features>
I0804 00:32:31.561082 21140 main.go:141] libmachine: (ha-230158) <acpi/>
I0804 00:32:31.561086 21140 main.go:141] libmachine: (ha-230158) <apic/>
I0804 00:32:31.561092 21140 main.go:141] libmachine: (ha-230158) <pae/>
I0804 00:32:31.561101 21140 main.go:141] libmachine: (ha-230158)
I0804 00:32:31.561106 21140 main.go:141] libmachine: (ha-230158) </features>
I0804 00:32:31.561111 21140 main.go:141] libmachine: (ha-230158) <cpu mode='host-passthrough'>
I0804 00:32:31.561119 21140 main.go:141] libmachine: (ha-230158)
I0804 00:32:31.561123 21140 main.go:141] libmachine: (ha-230158) </cpu>
I0804 00:32:31.561146 21140 main.go:141] libmachine: (ha-230158) <os>
I0804 00:32:31.561165 21140 main.go:141] libmachine: (ha-230158) <type>hvm</type>
I0804 00:32:31.561172 21140 main.go:141] libmachine: (ha-230158) <boot dev='cdrom'/>
I0804 00:32:31.561187 21140 main.go:141] libmachine: (ha-230158) <boot dev='hd'/>
I0804 00:32:31.561198 21140 main.go:141] libmachine: (ha-230158) <bootmenu enable='no'/>
I0804 00:32:31.561213 21140 main.go:141] libmachine: (ha-230158) </os>
I0804 00:32:31.561219 21140 main.go:141] libmachine: (ha-230158) <devices>
I0804 00:32:31.561226 21140 main.go:141] libmachine: (ha-230158) <disk type='file' device='cdrom'>
I0804 00:32:31.561235 21140 main.go:141] libmachine: (ha-230158) <source file='/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/boot2docker.iso'/>
I0804 00:32:31.561244 21140 main.go:141] libmachine: (ha-230158) <target dev='hdc' bus='scsi'/>
I0804 00:32:31.561249 21140 main.go:141] libmachine: (ha-230158) <readonly/>
I0804 00:32:31.561256 21140 main.go:141] libmachine: (ha-230158) </disk>
I0804 00:32:31.561261 21140 main.go:141] libmachine: (ha-230158) <disk type='file' device='disk'>
I0804 00:32:31.561272 21140 main.go:141] libmachine: (ha-230158) <driver name='qemu' type='raw' cache='default' io='threads' />
I0804 00:32:31.561280 21140 main.go:141] libmachine: (ha-230158) <source file='/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/ha-230158.rawdisk'/>
I0804 00:32:31.561285 21140 main.go:141] libmachine: (ha-230158) <target dev='hda' bus='virtio'/>
I0804 00:32:31.561292 21140 main.go:141] libmachine: (ha-230158) </disk>
I0804 00:32:31.561296 21140 main.go:141] libmachine: (ha-230158) <interface type='network'>
I0804 00:32:31.561305 21140 main.go:141] libmachine: (ha-230158) <source network='mk-ha-230158'/>
I0804 00:32:31.561309 21140 main.go:141] libmachine: (ha-230158) <model type='virtio'/>
I0804 00:32:31.561334 21140 main.go:141] libmachine: (ha-230158) </interface>
I0804 00:32:31.561356 21140 main.go:141] libmachine: (ha-230158) <interface type='network'>
I0804 00:32:31.561367 21140 main.go:141] libmachine: (ha-230158) <source network='default'/>
I0804 00:32:31.561377 21140 main.go:141] libmachine: (ha-230158) <model type='virtio'/>
I0804 00:32:31.561386 21140 main.go:141] libmachine: (ha-230158) </interface>
I0804 00:32:31.561396 21140 main.go:141] libmachine: (ha-230158) <serial type='pty'>
I0804 00:32:31.561405 21140 main.go:141] libmachine: (ha-230158) <target port='0'/>
I0804 00:32:31.561412 21140 main.go:141] libmachine: (ha-230158) </serial>
I0804 00:32:31.561418 21140 main.go:141] libmachine: (ha-230158) <console type='pty'>
I0804 00:32:31.561433 21140 main.go:141] libmachine: (ha-230158) <target type='serial' port='0'/>
I0804 00:32:31.561445 21140 main.go:141] libmachine: (ha-230158) </console>
I0804 00:32:31.561455 21140 main.go:141] libmachine: (ha-230158) <rng model='virtio'>
I0804 00:32:31.561464 21140 main.go:141] libmachine: (ha-230158) <backend model='random'>/dev/random</backend>
I0804 00:32:31.561482 21140 main.go:141] libmachine: (ha-230158) </rng>
I0804 00:32:31.561492 21140 main.go:141] libmachine: (ha-230158)
I0804 00:32:31.561496 21140 main.go:141] libmachine: (ha-230158)
I0804 00:32:31.561505 21140 main.go:141] libmachine: (ha-230158) </devices>
I0804 00:32:31.561515 21140 main.go:141] libmachine: (ha-230158) </domain>
I0804 00:32:31.561527 21140 main.go:141] libmachine: (ha-230158)
I0804 00:32:31.565606 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:0e:1a:c8 in network default
I0804 00:32:31.566145 21140 main.go:141] libmachine: (ha-230158) Ensuring networks are active...
I0804 00:32:31.566160 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:31.566849 21140 main.go:141] libmachine: (ha-230158) Ensuring network default is active
I0804 00:32:31.567149 21140 main.go:141] libmachine: (ha-230158) Ensuring network mk-ha-230158 is active
I0804 00:32:31.567594 21140 main.go:141] libmachine: (ha-230158) Getting domain xml...
I0804 00:32:31.568314 21140 main.go:141] libmachine: (ha-230158) Creating domain...
I0804 00:32:32.752103 21140 main.go:141] libmachine: (ha-230158) Waiting to get IP...
I0804 00:32:32.752842 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:32.753189 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:32.753225 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:32.753166 21164 retry.go:31] will retry after 301.695034ms: waiting for machine to come up
I0804 00:32:33.056838 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:33.057343 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:33.057374 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:33.057292 21164 retry.go:31] will retry after 345.614204ms: waiting for machine to come up
I0804 00:32:33.405071 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:33.405512 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:33.405539 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:33.405464 21164 retry.go:31] will retry after 316.091612ms: waiting for machine to come up
I0804 00:32:33.723721 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:33.724168 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:33.724194 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:33.724122 21164 retry.go:31] will retry after 558.911264ms: waiting for machine to come up
I0804 00:32:34.284352 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:34.284769 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:34.284790 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:34.284728 21164 retry.go:31] will retry after 465.210228ms: waiting for machine to come up
I0804 00:32:34.751423 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:34.751758 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:34.751786 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:34.751726 21164 retry.go:31] will retry after 609.962342ms: waiting for machine to come up
I0804 00:32:35.363533 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:35.363913 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:35.363947 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:35.363876 21164 retry.go:31] will retry after 731.983307ms: waiting for machine to come up
I0804 00:32:36.097612 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:36.098025 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:36.098052 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:36.097985 21164 retry.go:31] will retry after 1.047630115s: waiting for machine to come up
I0804 00:32:37.147182 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:37.147727 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:37.147766 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:37.147690 21164 retry.go:31] will retry after 1.221202371s: waiting for machine to come up
I0804 00:32:38.371009 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:38.371502 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:38.371531 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:38.371443 21164 retry.go:31] will retry after 2.01003947s: waiting for machine to come up
I0804 00:32:40.384779 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:40.385213 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:40.385237 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:40.385155 21164 retry.go:31] will retry after 2.043530448s: waiting for machine to come up
I0804 00:32:42.430015 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:42.430527 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:42.430553 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:42.430486 21164 retry.go:31] will retry after 2.637093898s: waiting for machine to come up
I0804 00:32:45.071390 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:45.071939 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:45.071962 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:45.071899 21164 retry.go:31] will retry after 3.860426233s: waiting for machine to come up
I0804 00:32:48.936168 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:48.936555 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find current IP address of domain ha-230158 in network mk-ha-230158
I0804 00:32:48.936574 21140 main.go:141] libmachine: (ha-230158) DBG | I0804 00:32:48.936510 21164 retry.go:31] will retry after 5.157668556s: waiting for machine to come up
I0804 00:32:54.097780 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.098254 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has current primary IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.098278 21140 main.go:141] libmachine: (ha-230158) Found IP for machine: 192.168.39.132
I0804 00:32:54.098291 21140 main.go:141] libmachine: (ha-230158) Reserving static IP address...
I0804 00:32:54.098729 21140 main.go:141] libmachine: (ha-230158) DBG | unable to find host DHCP lease matching {name: "ha-230158", mac: "52:54:00:a9:92:75", ip: "192.168.39.132"} in network mk-ha-230158
I0804 00:32:54.167146 21140 main.go:141] libmachine: (ha-230158) DBG | Getting to WaitForSSH function...
I0804 00:32:54.167178 21140 main.go:141] libmachine: (ha-230158) Reserved static IP address: 192.168.39.132
I0804 00:32:54.167210 21140 main.go:141] libmachine: (ha-230158) Waiting for SSH to be available...
I0804 00:32:54.169968 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.170456 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a9:92:75}
I0804 00:32:54.170483 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.170647 21140 main.go:141] libmachine: (ha-230158) DBG | Using SSH client type: external
I0804 00:32:54.170673 21140 main.go:141] libmachine: (ha-230158) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa (-rw-------)
I0804 00:32:54.170698 21140 main.go:141] libmachine: (ha-230158) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa -p 22] /usr/bin/ssh <nil>}
I0804 00:32:54.170707 21140 main.go:141] libmachine: (ha-230158) DBG | About to run SSH command:
I0804 00:32:54.170724 21140 main.go:141] libmachine: (ha-230158) DBG | exit 0
I0804 00:32:54.294070 21140 main.go:141] libmachine: (ha-230158) DBG | SSH cmd err, output: <nil>:
I0804 00:32:54.294349 21140 main.go:141] libmachine: (ha-230158) KVM machine creation complete!
I0804 00:32:54.294681 21140 main.go:141] libmachine: (ha-230158) Calling .GetConfigRaw
I0804 00:32:54.295181 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:32:54.295461 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:32:54.295648 21140 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0804 00:32:54.295663 21140 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:32:54.296807 21140 main.go:141] libmachine: Detecting operating system of created instance...
I0804 00:32:54.296822 21140 main.go:141] libmachine: Waiting for SSH to be available...
I0804 00:32:54.296827 21140 main.go:141] libmachine: Getting to WaitForSSH function...
I0804 00:32:54.296832 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:54.299017 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.299319 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:54.299341 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.299424 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:32:54.299607 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:54.299762 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:54.299937 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:32:54.300060 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:32:54.300244 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0804 00:32:54.300256 21140 main.go:141] libmachine: About to run SSH command:
exit 0
I0804 00:32:54.405542 21140 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0804 00:32:54.405565 21140 main.go:141] libmachine: Detecting the provisioner...
I0804 00:32:54.405575 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:54.407782 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.408139 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:54.408168 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.408286 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:32:54.408492 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:54.408647 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:54.408783 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:32:54.408938 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:32:54.409095 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0804 00:32:54.409105 21140 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0804 00:32:54.514801 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0804 00:32:54.514871 21140 main.go:141] libmachine: found compatible host: buildroot
I0804 00:32:54.514884 21140 main.go:141] libmachine: Provisioning with buildroot...
I0804 00:32:54.514896 21140 main.go:141] libmachine: (ha-230158) Calling .GetMachineName
I0804 00:32:54.515131 21140 buildroot.go:166] provisioning hostname "ha-230158"
I0804 00:32:54.515160 21140 main.go:141] libmachine: (ha-230158) Calling .GetMachineName
I0804 00:32:54.515363 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:54.517892 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.518220 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:54.518267 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.518438 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:32:54.518621 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:54.518792 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:54.518962 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:32:54.519189 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:32:54.519366 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0804 00:32:54.519384 21140 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-230158 && echo "ha-230158" | sudo tee /etc/hostname
I0804 00:32:54.640261 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-230158
I0804 00:32:54.640282 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:54.642938 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.643365 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:54.643386 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.643520 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:32:54.643683 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:54.643833 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:54.643976 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:32:54.644169 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:32:54.644351 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0804 00:32:54.644371 21140 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-230158' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-230158/g' /etc/hosts;
else
echo '127.0.1.1 ha-230158' | sudo tee -a /etc/hosts;
fi
fi
I0804 00:32:54.758999 21140 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0804 00:32:54.759031 21140 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-3947/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-3947/.minikube}
I0804 00:32:54.759061 21140 buildroot.go:174] setting up certificates
I0804 00:32:54.759070 21140 provision.go:84] configureAuth start
I0804 00:32:54.759079 21140 main.go:141] libmachine: (ha-230158) Calling .GetMachineName
I0804 00:32:54.759335 21140 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:32:54.761860 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.762208 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:54.762254 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.762353 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:54.764447 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.764735 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:54.764777 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.764853 21140 provision.go:143] copyHostCerts
I0804 00:32:54.764875 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
I0804 00:32:54.764913 21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem, removing ...
I0804 00:32:54.764921 21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
I0804 00:32:54.764981 21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem (1082 bytes)
I0804 00:32:54.765047 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
I0804 00:32:54.765064 21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem, removing ...
I0804 00:32:54.765070 21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
I0804 00:32:54.765091 21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem (1123 bytes)
I0804 00:32:54.765129 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
I0804 00:32:54.765145 21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem, removing ...
I0804 00:32:54.765150 21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
I0804 00:32:54.765171 21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem (1679 bytes)
I0804 00:32:54.765212 21140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem org=jenkins.ha-230158 san=[127.0.0.1 192.168.39.132 ha-230158 localhost minikube]
I0804 00:32:54.838410 21140 provision.go:177] copyRemoteCerts
I0804 00:32:54.838457 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0804 00:32:54.838481 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:54.840788 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.841102 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:54.841131 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.841270 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:32:54.841468 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:54.841641 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:32:54.841763 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:32:54.924469 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0804 00:32:54.924532 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0804 00:32:54.948277 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem -> /etc/docker/server.pem
I0804 00:32:54.948339 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0804 00:32:54.971889 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0804 00:32:54.971954 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0804 00:32:54.995038 21140 provision.go:87] duration metric: took 235.956813ms to configureAuth
I0804 00:32:54.995085 21140 buildroot.go:189] setting minikube options for container-runtime
I0804 00:32:54.995245 21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:32:54.995269 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:32:54.995535 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:54.998409 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.998785 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:54.998809 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:54.998968 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:32:54.999136 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:54.999273 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:54.999394 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:32:54.999563 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:32:54.999719 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0804 00:32:54.999730 21140 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0804 00:32:55.107480 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0804 00:32:55.107506 21140 buildroot.go:70] root file system type: tmpfs
I0804 00:32:55.107642 21140 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0804 00:32:55.107667 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:55.110196 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:55.110640 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:55.110660 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:55.110846 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:32:55.111020 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:55.111149 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:55.111265 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:32:55.111429 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:32:55.111608 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0804 00:32:55.111668 21140 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0804 00:32:55.228500 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0804 00:32:55.228527 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:55.231052 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:55.231414 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:55.231450 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:55.231603 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:32:55.231773 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:55.231921 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:55.232099 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:32:55.232281 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:32:55.232491 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0804 00:32:55.232517 21140 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0804 00:32:56.991416 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0804 00:32:56.991440 21140 main.go:141] libmachine: Checking connection to Docker...
I0804 00:32:56.991448 21140 main.go:141] libmachine: (ha-230158) Calling .GetURL
I0804 00:32:56.992552 21140 main.go:141] libmachine: (ha-230158) DBG | Using libvirt version 6000000
I0804 00:32:56.994460 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:56.994745 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:56.994773 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:56.994931 21140 main.go:141] libmachine: Docker is up and running!
I0804 00:32:56.994949 21140 main.go:141] libmachine: Reticulating splines...
I0804 00:32:56.994957 21140 client.go:171] duration metric: took 26.050639623s to LocalClient.Create
I0804 00:32:56.994980 21140 start.go:167] duration metric: took 26.050695026s to libmachine.API.Create "ha-230158"
I0804 00:32:56.994992 21140 start.go:293] postStartSetup for "ha-230158" (driver="kvm2")
I0804 00:32:56.995003 21140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0804 00:32:56.995019 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:32:56.995233 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0804 00:32:56.995259 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:56.997109 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:56.997414 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:56.997444 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:56.997570 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:32:56.997762 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:56.997937 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:32:56.998085 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:32:57.080881 21140 ssh_runner.go:195] Run: cat /etc/os-release
I0804 00:32:57.084955 21140 info.go:137] Remote host: Buildroot 2023.02.9
I0804 00:32:57.084974 21140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/addons for local assets ...
I0804 00:32:57.085029 21140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/files for local assets ...
I0804 00:32:57.085100 21140 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> 111362.pem in /etc/ssl/certs
I0804 00:32:57.085109 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /etc/ssl/certs/111362.pem
I0804 00:32:57.085190 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0804 00:32:57.094432 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /etc/ssl/certs/111362.pem (1708 bytes)
I0804 00:32:57.116709 21140 start.go:296] duration metric: took 121.696868ms for postStartSetup
I0804 00:32:57.116750 21140 main.go:141] libmachine: (ha-230158) Calling .GetConfigRaw
I0804 00:32:57.117323 21140 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:32:57.119831 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:57.120165 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:57.120212 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:57.120406 21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:32:57.120581 21140 start.go:128] duration metric: took 26.193880441s to createHost
I0804 00:32:57.120604 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:57.123098 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:57.123407 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:57.123430 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:57.123566 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:32:57.123742 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:57.123910 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:57.124071 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:32:57.124217 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:32:57.124377 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0804 00:32:57.124389 21140 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0804 00:32:57.231043 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731577.209951211
I0804 00:32:57.231087 21140 fix.go:216] guest clock: 1722731577.209951211
I0804 00:32:57.231098 21140 fix.go:229] Guest: 2024-08-04 00:32:57.209951211 +0000 UTC Remote: 2024-08-04 00:32:57.12059219 +0000 UTC m=+26.297674596 (delta=89.359021ms)
I0804 00:32:57.231126 21140 fix.go:200] guest clock delta is within tolerance: 89.359021ms
I0804 00:32:57.231133 21140 start.go:83] releasing machines lock for "ha-230158", held for 26.304539197s
I0804 00:32:57.231163 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:32:57.231428 21140 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:32:57.234051 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:57.234508 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:57.234537 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:57.234705 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:32:57.235271 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:32:57.235452 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:32:57.235547 21140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0804 00:32:57.235576 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:57.235666 21140 ssh_runner.go:195] Run: cat /version.json
I0804 00:32:57.235688 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:32:57.238053 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:57.238116 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:57.238447 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:57.238471 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:57.238495 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:32:57.238524 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:32:57.238607 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:32:57.238719 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:32:57.238788 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:57.238859 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:32:57.238933 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:32:57.238996 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:32:57.239052 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:32:57.239091 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:32:57.335233 21140 ssh_runner.go:195] Run: systemctl --version
I0804 00:32:57.341045 21140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0804 00:32:57.346598 21140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0804 00:32:57.346655 21140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0804 00:32:57.363478 21140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0804 00:32:57.363507 21140 start.go:495] detecting cgroup driver to use...
I0804 00:32:57.363613 21140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0804 00:32:57.381550 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0804 00:32:57.392232 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0804 00:32:57.402697 21140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0804 00:32:57.402741 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0804 00:32:57.413230 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0804 00:32:57.423689 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0804 00:32:57.433882 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0804 00:32:57.444123 21140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0804 00:32:57.454604 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0804 00:32:57.464894 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0804 00:32:57.475126 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0804 00:32:57.485555 21140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0804 00:32:57.494566 21140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0804 00:32:57.503704 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:32:57.609739 21140 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0804 00:32:57.634502 21140 start.go:495] detecting cgroup driver to use...
I0804 00:32:57.634579 21140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0804 00:32:57.649722 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0804 00:32:57.663075 21140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0804 00:32:57.681027 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0804 00:32:57.694388 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0804 00:32:57.707836 21140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0804 00:32:57.737257 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0804 00:32:57.750381 21140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0804 00:32:57.768686 21140 ssh_runner.go:195] Run: which cri-dockerd
I0804 00:32:57.772533 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0804 00:32:57.781420 21140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0804 00:32:57.797649 21140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0804 00:32:57.904330 21140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0804 00:32:58.015103 21140 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0804 00:32:58.015241 21140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0804 00:32:58.032390 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:32:58.141269 21140 ssh_runner.go:195] Run: sudo systemctl restart docker
I0804 00:33:00.497232 21140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.355928533s)
I0804 00:33:00.497299 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0804 00:33:00.511224 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0804 00:33:00.524642 21140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0804 00:33:00.633804 21140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0804 00:33:00.745087 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:33:00.867368 21140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0804 00:33:00.884032 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0804 00:33:00.898059 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:33:01.002422 21140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0804 00:33:01.079045 21140 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0804 00:33:01.079118 21140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0804 00:33:01.084853 21140 start.go:563] Will wait 60s for crictl version
I0804 00:33:01.084906 21140 ssh_runner.go:195] Run: which crictl
I0804 00:33:01.090370 21140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0804 00:33:01.127604 21140 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.1.1
RuntimeApiVersion: v1
I0804 00:33:01.127655 21140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0804 00:33:01.154224 21140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0804 00:33:01.177225 21140 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
I0804 00:33:01.177331 21140 main.go:141] libmachine: (ha-230158) Calling .GetIP
I0804 00:33:01.180121 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:33:01.180494 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:33:01.180522 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:33:01.180772 21140 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0804 00:33:01.184959 21140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0804 00:33:01.198426 21140 kubeadm.go:883] updating cluster {Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0804 00:33:01.198549 21140 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0804 00:33:01.198599 21140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0804 00:33:01.214393 21140 docker.go:685] Got preloaded images:
I0804 00:33:01.214411 21140 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
I0804 00:33:01.214450 21140 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0804 00:33:01.225255 21140 ssh_runner.go:195] Run: which lz4
I0804 00:33:01.229351 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0804 00:33:01.229451 21140 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0804 00:33:01.233649 21140 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0804 00:33:01.233678 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
I0804 00:33:02.490141 21140 docker.go:649] duration metric: took 1.260715608s to copy over tarball
I0804 00:33:02.490208 21140 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0804 00:33:04.338533 21140 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.848304605s)
I0804 00:33:04.338558 21140 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0804 00:33:04.373097 21140 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0804 00:33:04.383582 21140 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
I0804 00:33:04.401245 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:33:04.526806 21140 ssh_runner.go:195] Run: sudo systemctl restart docker
I0804 00:33:08.641884 21140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.115040619s)
I0804 00:33:08.642005 21140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0804 00:33:08.659453 21140 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0804 00:33:08.659474 21140 cache_images.go:84] Images are preloaded, skipping loading
I0804 00:33:08.659489 21140 kubeadm.go:934] updating node { 192.168.39.132 8443 v1.30.3 docker true true} ...
I0804 00:33:08.659603 21140 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-230158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
[Install]
config:
{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0804 00:33:08.659661 21140 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0804 00:33:08.716313 21140 cni.go:84] Creating CNI manager for ""
I0804 00:33:08.716341 21140 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0804 00:33:08.716355 21140 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0804 00:33:08.716382 21140 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-230158 NodeName:ha-230158 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0804 00:33:08.716595 21140 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.132
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ha-230158"
kubeletExtraArgs:
node-ip: 192.168.39.132
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0804 00:33:08.716626 21140 kube-vip.go:115] generating kube-vip config ...
I0804 00:33:08.716676 21140 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0804 00:33:08.732205 21140 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0804 00:33:08.732311 21140 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.39.254
- name: prometheus_server
value: :2112
- name: lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.8.0
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/super-admin.conf"
name: kubeconfig
status: {}
I0804 00:33:08.732368 21140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
I0804 00:33:08.741857 21140 binaries.go:44] Found k8s binaries, skipping transfer
I0804 00:33:08.741916 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0804 00:33:08.751330 21140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
I0804 00:33:08.767807 21140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0804 00:33:08.784009 21140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
I0804 00:33:08.800055 21140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
I0804 00:33:08.816343 21140 ssh_runner.go:195] Run: grep 192.168.39.254 control-plane.minikube.internal$ /etc/hosts
I0804 00:33:08.820140 21140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0804 00:33:08.831454 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:33:08.935642 21140 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0804 00:33:08.952890 21140 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158 for IP: 192.168.39.132
I0804 00:33:08.952912 21140 certs.go:194] generating shared ca certs ...
I0804 00:33:08.952930 21140 certs.go:226] acquiring lock for ca certs: {Name:mkffa482a260ec35b4e7e61a9f84c11349615c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:33:08.953076 21140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key
I0804 00:33:08.953143 21140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key
I0804 00:33:08.953157 21140 certs.go:256] generating profile certs ...
I0804 00:33:08.953237 21140 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key
I0804 00:33:08.953254 21140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.crt with IP's: []
I0804 00:33:09.154018 21140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.crt ...
I0804 00:33:09.154047 21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.crt: {Name:mk77c87b09a42f8e8aee2ee64e4eb37962023013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:33:09.154262 21140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key ...
I0804 00:33:09.154278 21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key: {Name:mkd98ec90d89c2dbad3b99fe7050b3894fffdeed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:33:09.154387 21140 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.5e009b09
I0804 00:33:09.154406 21140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.5e009b09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.254]
I0804 00:33:09.252772 21140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.5e009b09 ...
I0804 00:33:09.252800 21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.5e009b09: {Name:mk8b5b74784bb5e469752a6b2aa491801d503e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:33:09.252969 21140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.5e009b09 ...
I0804 00:33:09.252986 21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.5e009b09: {Name:mk7bee344db6d519ff8e4e621b3b58f319578c73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:33:09.253087 21140 certs.go:381] copying /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.5e009b09 -> /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt
I0804 00:33:09.253190 21140 certs.go:385] copying /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.5e009b09 -> /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key
I0804 00:33:09.253281 21140 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key
I0804 00:33:09.253300 21140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt with IP's: []
I0804 00:33:09.364891 21140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt ...
I0804 00:33:09.364920 21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt: {Name:mk3d390eec4d12ccf4bc093c347188787f985e6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:33:09.365094 21140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key ...
I0804 00:33:09.365109 21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key: {Name:mkf1993957b9d4c0bc8a39fbf94f6893985f9203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:33:09.365208 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0804 00:33:09.365232 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0804 00:33:09.365252 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0804 00:33:09.365269 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0804 00:33:09.365289 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0804 00:33:09.365308 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0804 00:33:09.365326 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0804 00:33:09.365344 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0804 00:33:09.365404 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem (1338 bytes)
W0804 00:33:09.365449 21140 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136_empty.pem, impossibly tiny 0 bytes
I0804 00:33:09.365462 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem (1679 bytes)
I0804 00:33:09.365495 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem (1082 bytes)
I0804 00:33:09.365526 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem (1123 bytes)
I0804 00:33:09.365559 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem (1679 bytes)
I0804 00:33:09.365628 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem (1708 bytes)
I0804 00:33:09.365673 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /usr/share/ca-certificates/111362.pem
I0804 00:33:09.365694 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0804 00:33:09.365712 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem -> /usr/share/ca-certificates/11136.pem
I0804 00:33:09.366273 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0804 00:33:09.391653 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0804 00:33:09.414747 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0804 00:33:09.437640 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0804 00:33:09.460325 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0804 00:33:09.483242 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0804 00:33:09.505490 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0804 00:33:09.527863 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0804 00:33:09.550718 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /usr/share/ca-certificates/111362.pem (1708 bytes)
I0804 00:33:09.573234 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0804 00:33:09.595976 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem --> /usr/share/ca-certificates/11136.pem (1338 bytes)
I0804 00:33:09.618743 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0804 00:33:09.635017 21140 ssh_runner.go:195] Run: openssl version
I0804 00:33:09.640910 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111362.pem && ln -fs /usr/share/ca-certificates/111362.pem /etc/ssl/certs/111362.pem"
I0804 00:33:09.651593 21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111362.pem
I0804 00:33:09.656397 21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 4 00:28 /usr/share/ca-certificates/111362.pem
I0804 00:33:09.656446 21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111362.pem
I0804 00:33:09.662327 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111362.pem /etc/ssl/certs/3ec20f2e.0"
I0804 00:33:09.673051 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0804 00:33:09.683683 21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0804 00:33:09.688002 21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 4 00:21 /usr/share/ca-certificates/minikubeCA.pem
I0804 00:33:09.688043 21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0804 00:33:09.693530 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0804 00:33:09.707715 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11136.pem && ln -fs /usr/share/ca-certificates/11136.pem /etc/ssl/certs/11136.pem"
I0804 00:33:09.718660 21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11136.pem
I0804 00:33:09.723160 21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 4 00:28 /usr/share/ca-certificates/11136.pem
I0804 00:33:09.723214 21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11136.pem
I0804 00:33:09.729048 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11136.pem /etc/ssl/certs/51391683.0"
I0804 00:33:09.742606 21140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0804 00:33:09.746906 21140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0804 00:33:09.746953 21140 kubeadm.go:392] StartCluster: {Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0804 00:33:09.747116 21140 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0804 00:33:09.775287 21140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0804 00:33:09.787393 21140 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0804 00:33:09.797273 21140 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0804 00:33:09.807040 21140 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0804 00:33:09.807058 21140 kubeadm.go:157] found existing configuration files:
I0804 00:33:09.807101 21140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0804 00:33:09.816230 21140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0804 00:33:09.816272 21140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0804 00:33:09.825781 21140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0804 00:33:09.834834 21140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0804 00:33:09.834872 21140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0804 00:33:09.844017 21140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0804 00:33:09.852963 21140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0804 00:33:09.852996 21140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0804 00:33:09.862317 21140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0804 00:33:09.871282 21140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0804 00:33:09.871318 21140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0804 00:33:09.880621 21140 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0804 00:33:10.099955 21140 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0804 00:33:21.015275 21140 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
I0804 00:33:21.015361 21140 kubeadm.go:310] [preflight] Running pre-flight checks
I0804 00:33:21.015466 21140 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0804 00:33:21.015598 21140 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0804 00:33:21.015702 21140 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0804 00:33:21.015791 21140 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0804 00:33:21.017397 21140 out.go:204] - Generating certificates and keys ...
I0804 00:33:21.017476 21140 kubeadm.go:310] [certs] Using existing ca certificate authority
I0804 00:33:21.017534 21140 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0804 00:33:21.017642 21140 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0804 00:33:21.017733 21140 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0804 00:33:21.017817 21140 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0804 00:33:21.017887 21140 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0804 00:33:21.017965 21140 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0804 00:33:21.018249 21140 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-230158 localhost] and IPs [192.168.39.132 127.0.0.1 ::1]
I0804 00:33:21.018339 21140 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0804 00:33:21.018518 21140 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-230158 localhost] and IPs [192.168.39.132 127.0.0.1 ::1]
I0804 00:33:21.018618 21140 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0804 00:33:21.018708 21140 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0804 00:33:21.018774 21140 kubeadm.go:310] [certs] Generating "sa" key and public key
I0804 00:33:21.019023 21140 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0804 00:33:21.019105 21140 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0804 00:33:21.019170 21140 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0804 00:33:21.019243 21140 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0804 00:33:21.019335 21140 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0804 00:33:21.019418 21140 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0804 00:33:21.019546 21140 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0804 00:33:21.019651 21140 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0804 00:33:21.020981 21140 out.go:204] - Booting up control plane ...
I0804 00:33:21.021058 21140 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0804 00:33:21.021135 21140 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0804 00:33:21.021222 21140 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0804 00:33:21.021344 21140 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0804 00:33:21.021481 21140 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0804 00:33:21.021526 21140 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0804 00:33:21.021709 21140 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0804 00:33:21.021782 21140 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
I0804 00:33:21.021861 21140 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.629998ms
I0804 00:33:21.021954 21140 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0804 00:33:21.022015 21140 kubeadm.go:310] [api-check] The API server is healthy after 6.128513733s
I0804 00:33:21.022108 21140 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0804 00:33:21.022225 21140 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0804 00:33:21.022305 21140 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0804 00:33:21.022485 21140 kubeadm.go:310] [mark-control-plane] Marking the node ha-230158 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0804 00:33:21.022550 21140 kubeadm.go:310] [bootstrap-token] Using token: xdcwsg.p04udedd0rn0a6qg
I0804 00:33:21.023876 21140 out.go:204] - Configuring RBAC rules ...
I0804 00:33:21.023967 21140 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0804 00:33:21.024071 21140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0804 00:33:21.024234 21140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0804 00:33:21.024358 21140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0804 00:33:21.024461 21140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0804 00:33:21.024544 21140 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0804 00:33:21.024655 21140 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0804 00:33:21.024695 21140 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0804 00:33:21.024774 21140 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0804 00:33:21.024784 21140 kubeadm.go:310]
I0804 00:33:21.024832 21140 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0804 00:33:21.024838 21140 kubeadm.go:310]
I0804 00:33:21.024916 21140 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0804 00:33:21.024930 21140 kubeadm.go:310]
I0804 00:33:21.024972 21140 kubeadm.go:310] mkdir -p $HOME/.kube
I0804 00:33:21.025023 21140 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0804 00:33:21.025076 21140 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0804 00:33:21.025091 21140 kubeadm.go:310]
I0804 00:33:21.025147 21140 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0804 00:33:21.025155 21140 kubeadm.go:310]
I0804 00:33:21.025215 21140 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0804 00:33:21.025222 21140 kubeadm.go:310]
I0804 00:33:21.025296 21140 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0804 00:33:21.025361 21140 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0804 00:33:21.025417 21140 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0804 00:33:21.025423 21140 kubeadm.go:310]
I0804 00:33:21.025497 21140 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0804 00:33:21.025568 21140 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0804 00:33:21.025579 21140 kubeadm.go:310]
I0804 00:33:21.025654 21140 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xdcwsg.p04udedd0rn0a6qg \
I0804 00:33:21.025762 21140 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:df45234da77be7664dbc18ef5748e1cb4d47aa47bd0026b7b7a8eef37767d0f0 \
I0804 00:33:21.025785 21140 kubeadm.go:310] --control-plane
I0804 00:33:21.025792 21140 kubeadm.go:310]
I0804 00:33:21.025868 21140 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0804 00:33:21.025875 21140 kubeadm.go:310]
I0804 00:33:21.025940 21140 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xdcwsg.p04udedd0rn0a6qg \
I0804 00:33:21.026039 21140 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:df45234da77be7664dbc18ef5748e1cb4d47aa47bd0026b7b7a8eef37767d0f0
I0804 00:33:21.026049 21140 cni.go:84] Creating CNI manager for ""
I0804 00:33:21.026055 21140 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0804 00:33:21.027626 21140 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0804 00:33:21.028874 21140 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0804 00:33:21.034481 21140 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
I0804 00:33:21.034495 21140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0804 00:33:21.054487 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0804 00:33:21.377297 21140 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0804 00:33:21.377369 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:21.377421 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-230158 minikube.k8s.io/updated_at=2024_08_04T00_33_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=ha-230158 minikube.k8s.io/primary=true
I0804 00:33:21.392409 21140 ops.go:34] apiserver oom_adj: -16
I0804 00:33:21.500779 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:22.001370 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:22.501148 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:23.001450 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:23.500923 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:24.001430 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:24.500937 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:25.001823 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:25.500942 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:26.001281 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:26.501419 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:27.001535 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:27.500922 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:28.001367 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:28.501606 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:29.001608 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:29.501097 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:30.001320 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:30.501777 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:31.001829 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:31.500852 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:32.000906 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:32.501555 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:33.001108 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:33.500822 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0804 00:33:33.599996 21140 kubeadm.go:1113] duration metric: took 12.222679799s to wait for elevateKubeSystemPrivileges
I0804 00:33:33.600032 21140 kubeadm.go:394] duration metric: took 23.853080946s to StartCluster
I0804 00:33:33.600052 21140 settings.go:142] acquiring lock: {Name:mk93b1d9065d26901985574a9ad74d7ec3be884d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:33:33.600124 21140 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19364-3947/kubeconfig
I0804 00:33:33.601002 21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/kubeconfig: {Name:mk8868e58184f812ddd7933d7e896763e01aff49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:33:33.601248 21140 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0804 00:33:33.601268 21140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0804 00:33:33.601277 21140 start.go:241] waiting for startup goroutines ...
I0804 00:33:33.601311 21140 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0804 00:33:33.601370 21140 addons.go:69] Setting storage-provisioner=true in profile "ha-230158"
I0804 00:33:33.601386 21140 addons.go:69] Setting default-storageclass=true in profile "ha-230158"
I0804 00:33:33.601403 21140 addons.go:234] Setting addon storage-provisioner=true in "ha-230158"
I0804 00:33:33.601423 21140 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-230158"
I0804 00:33:33.601446 21140 host.go:66] Checking if "ha-230158" exists ...
I0804 00:33:33.601526 21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:33:33.601761 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:33:33.601797 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:33:33.601853 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:33:33.601892 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:33:33.617179 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40405
I0804 00:33:33.617179 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37073
I0804 00:33:33.617665 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:33:33.617812 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:33:33.618191 21140 main.go:141] libmachine: Using API Version 1
I0804 00:33:33.618209 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:33:33.618351 21140 main.go:141] libmachine: Using API Version 1
I0804 00:33:33.618372 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:33:33.618612 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:33:33.618671 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:33:33.618836 21140 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:33:33.619191 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:33:33.619243 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:33:33.621018 21140 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/19364-3947/kubeconfig
I0804 00:33:33.621359 21140 kapi.go:59] client config for ha-230158: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key", CAFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0804 00:33:33.621903 21140 cert_rotation.go:137] Starting client certificate rotation controller
I0804 00:33:33.622134 21140 addons.go:234] Setting addon default-storageclass=true in "ha-230158"
I0804 00:33:33.622173 21140 host.go:66] Checking if "ha-230158" exists ...
I0804 00:33:33.622560 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:33:33.622603 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:33:33.634444 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37337
I0804 00:33:33.634887 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:33:33.635446 21140 main.go:141] libmachine: Using API Version 1
I0804 00:33:33.635474 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:33:33.635799 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:33:33.635966 21140 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:33:33.637531 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:33:33.637685 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33185
I0804 00:33:33.638061 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:33:33.638645 21140 main.go:141] libmachine: Using API Version 1
I0804 00:33:33.638670 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:33:33.639061 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:33:33.639216 21140 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0804 00:33:33.639579 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:33:33.639624 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:33:33.640364 21140 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0804 00:33:33.640378 21140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0804 00:33:33.640390 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:33:33.642791 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:33:33.643109 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:33:33.643135 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:33:33.643250 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:33:33.643417 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:33:33.643550 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:33:33.643694 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:33:33.658077 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36749
I0804 00:33:33.658469 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:33:33.659008 21140 main.go:141] libmachine: Using API Version 1
I0804 00:33:33.659033 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:33:33.659357 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:33:33.659586 21140 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:33:33.661196 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:33:33.661413 21140 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0804 00:33:33.661429 21140 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0804 00:33:33.661445 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:33:33.664060 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:33:33.664559 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:33:33.664584 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:33:33.664668 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:33:33.664853 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:33:33.664989 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:33:33.665122 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:33:33.775943 21140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0804 00:33:33.788753 21140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0804 00:33:33.844831 21140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0804 00:33:34.311857 21140 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0804 00:33:34.363949 21140 main.go:141] libmachine: Making call to close driver server
I0804 00:33:34.363978 21140 main.go:141] libmachine: Making call to close driver server
I0804 00:33:34.363987 21140 main.go:141] libmachine: (ha-230158) Calling .Close
I0804 00:33:34.363992 21140 main.go:141] libmachine: (ha-230158) Calling .Close
I0804 00:33:34.364299 21140 main.go:141] libmachine: (ha-230158) DBG | Closing plugin on server side
I0804 00:33:34.364322 21140 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:33:34.364327 21140 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:33:34.364332 21140 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 00:33:34.364336 21140 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 00:33:34.364345 21140 main.go:141] libmachine: Making call to close driver server
I0804 00:33:34.364349 21140 main.go:141] libmachine: Making call to close driver server
I0804 00:33:34.364353 21140 main.go:141] libmachine: (ha-230158) Calling .Close
I0804 00:33:34.364368 21140 main.go:141] libmachine: (ha-230158) Calling .Close
I0804 00:33:34.364590 21140 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:33:34.364605 21140 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 00:33:34.364643 21140 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:33:34.364654 21140 main.go:141] libmachine: (ha-230158) DBG | Closing plugin on server side
I0804 00:33:34.364663 21140 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 00:33:34.364795 21140 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
I0804 00:33:34.364810 21140 round_trippers.go:469] Request Headers:
I0804 00:33:34.364822 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:33:34.364832 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:33:34.374805 21140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0804 00:33:34.375382 21140 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
I0804 00:33:34.375397 21140 round_trippers.go:469] Request Headers:
I0804 00:33:34.375407 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:33:34.375413 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:33:34.375417 21140 round_trippers.go:473] Content-Type: application/json
I0804 00:33:34.377733 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:33:34.377876 21140 main.go:141] libmachine: Making call to close driver server
I0804 00:33:34.377889 21140 main.go:141] libmachine: (ha-230158) Calling .Close
I0804 00:33:34.378104 21140 main.go:141] libmachine: Successfully made call to close driver server
I0804 00:33:34.378122 21140 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 00:33:34.379597 21140 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0804 00:33:34.380748 21140 addons.go:510] duration metric: took 779.456208ms for enable addons: enabled=[storage-provisioner default-storageclass]
I0804 00:33:34.380772 21140 start.go:246] waiting for cluster config update ...
I0804 00:33:34.380781 21140 start.go:255] writing updated cluster config ...
I0804 00:33:34.382225 21140 out.go:177]
I0804 00:33:34.383357 21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:33:34.383439 21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:33:34.384860 21140 out.go:177] * Starting "ha-230158-m02" control-plane node in "ha-230158" cluster
I0804 00:33:34.385928 21140 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0804 00:33:34.385946 21140 cache.go:56] Caching tarball of preloaded images
I0804 00:33:34.386031 21140 preload.go:172] Found /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0804 00:33:34.386044 21140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0804 00:33:34.386118 21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:33:34.386311 21140 start.go:360] acquireMachinesLock for ha-230158-m02: {Name:mk3c8b650475b5a29be5f1e49e0345d4de7c1632 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0804 00:33:34.386363 21140 start.go:364] duration metric: took 32.811µs to acquireMachinesLock for "ha-230158-m02"
I0804 00:33:34.386387 21140 start.go:93] Provisioning new machine with config: &{Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0804 00:33:34.386466 21140 start.go:125] createHost starting for "m02" (driver="kvm2")
I0804 00:33:34.387864 21140 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0804 00:33:34.387949 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:33:34.387988 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:33:34.401959 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
I0804 00:33:34.402313 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:33:34.402735 21140 main.go:141] libmachine: Using API Version 1
I0804 00:33:34.402757 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:33:34.403072 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:33:34.403260 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:33:34.403388 21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:33:34.403545 21140 start.go:159] libmachine.API.Create for "ha-230158" (driver="kvm2")
I0804 00:33:34.403572 21140 client.go:168] LocalClient.Create starting
I0804 00:33:34.403605 21140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem
I0804 00:33:34.403644 21140 main.go:141] libmachine: Decoding PEM data...
I0804 00:33:34.403667 21140 main.go:141] libmachine: Parsing certificate...
I0804 00:33:34.403740 21140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem
I0804 00:33:34.403766 21140 main.go:141] libmachine: Decoding PEM data...
I0804 00:33:34.403784 21140 main.go:141] libmachine: Parsing certificate...
I0804 00:33:34.403810 21140 main.go:141] libmachine: Running pre-create checks...
I0804 00:33:34.403821 21140 main.go:141] libmachine: (ha-230158-m02) Calling .PreCreateCheck
I0804 00:33:34.403961 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetConfigRaw
I0804 00:33:34.405677 21140 main.go:141] libmachine: Creating machine...
I0804 00:33:34.405699 21140 main.go:141] libmachine: (ha-230158-m02) Calling .Create
I0804 00:33:34.405841 21140 main.go:141] libmachine: (ha-230158-m02) Creating KVM machine...
I0804 00:33:34.407026 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found existing default KVM network
I0804 00:33:34.407168 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found existing private KVM network mk-ha-230158
I0804 00:33:34.407306 21140 main.go:141] libmachine: (ha-230158-m02) Setting up store path in /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02 ...
I0804 00:33:34.407329 21140 main.go:141] libmachine: (ha-230158-m02) Building disk image from file:///home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
I0804 00:33:34.407367 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:34.407296 21577 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-3947/.minikube
I0804 00:33:34.407443 21140 main.go:141] libmachine: (ha-230158-m02) Downloading /home/jenkins/minikube-integration/19364-3947/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
I0804 00:33:34.631738 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:34.631614 21577 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa...
I0804 00:33:34.980781 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:34.980506 21577 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/ha-230158-m02.rawdisk...
I0804 00:33:34.980819 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Writing magic tar header
I0804 00:33:34.980837 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Writing SSH key tar header
I0804 00:33:34.981050 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:34.980961 21577 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02 ...
I0804 00:33:34.981304 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02
I0804 00:33:34.981324 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube/machines
I0804 00:33:34.981335 21140 main.go:141] libmachine: (ha-230158-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02 (perms=drwx------)
I0804 00:33:34.981346 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube
I0804 00:33:34.981361 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947
I0804 00:33:34.981492 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0804 00:33:34.981515 21140 main.go:141] libmachine: (ha-230158-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube/machines (perms=drwxr-xr-x)
I0804 00:33:34.981532 21140 main.go:141] libmachine: (ha-230158-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube (perms=drwxr-xr-x)
I0804 00:33:34.981547 21140 main.go:141] libmachine: (ha-230158-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947 (perms=drwxrwxr-x)
I0804 00:33:34.981567 21140 main.go:141] libmachine: (ha-230158-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0804 00:33:34.981581 21140 main.go:141] libmachine: (ha-230158-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0804 00:33:34.981593 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home/jenkins
I0804 00:33:34.981608 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Checking permissions on dir: /home
I0804 00:33:34.981620 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Skipping /home - not owner
I0804 00:33:34.981647 21140 main.go:141] libmachine: (ha-230158-m02) Creating domain...
I0804 00:33:34.982529 21140 main.go:141] libmachine: (ha-230158-m02) define libvirt domain using xml:
I0804 00:33:34.982547 21140 main.go:141] libmachine: (ha-230158-m02) <domain type='kvm'>
I0804 00:33:34.982578 21140 main.go:141] libmachine: (ha-230158-m02) <name>ha-230158-m02</name>
I0804 00:33:34.982598 21140 main.go:141] libmachine: (ha-230158-m02) <memory unit='MiB'>2200</memory>
I0804 00:33:34.982608 21140 main.go:141] libmachine: (ha-230158-m02) <vcpu>2</vcpu>
I0804 00:33:34.982615 21140 main.go:141] libmachine: (ha-230158-m02) <features>
I0804 00:33:34.982623 21140 main.go:141] libmachine: (ha-230158-m02) <acpi/>
I0804 00:33:34.982633 21140 main.go:141] libmachine: (ha-230158-m02) <apic/>
I0804 00:33:34.982639 21140 main.go:141] libmachine: (ha-230158-m02) <pae/>
I0804 00:33:34.982645 21140 main.go:141] libmachine: (ha-230158-m02)
I0804 00:33:34.982653 21140 main.go:141] libmachine: (ha-230158-m02) </features>
I0804 00:33:34.982661 21140 main.go:141] libmachine: (ha-230158-m02) <cpu mode='host-passthrough'>
I0804 00:33:34.982670 21140 main.go:141] libmachine: (ha-230158-m02)
I0804 00:33:34.982676 21140 main.go:141] libmachine: (ha-230158-m02) </cpu>
I0804 00:33:34.982686 21140 main.go:141] libmachine: (ha-230158-m02) <os>
I0804 00:33:34.982698 21140 main.go:141] libmachine: (ha-230158-m02) <type>hvm</type>
I0804 00:33:34.982710 21140 main.go:141] libmachine: (ha-230158-m02) <boot dev='cdrom'/>
I0804 00:33:34.982720 21140 main.go:141] libmachine: (ha-230158-m02) <boot dev='hd'/>
I0804 00:33:34.982737 21140 main.go:141] libmachine: (ha-230158-m02) <bootmenu enable='no'/>
I0804 00:33:34.982771 21140 main.go:141] libmachine: (ha-230158-m02) </os>
I0804 00:33:34.982784 21140 main.go:141] libmachine: (ha-230158-m02) <devices>
I0804 00:33:34.982795 21140 main.go:141] libmachine: (ha-230158-m02) <disk type='file' device='cdrom'>
I0804 00:33:34.982811 21140 main.go:141] libmachine: (ha-230158-m02) <source file='/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/boot2docker.iso'/>
I0804 00:33:34.982823 21140 main.go:141] libmachine: (ha-230158-m02) <target dev='hdc' bus='scsi'/>
I0804 00:33:34.982835 21140 main.go:141] libmachine: (ha-230158-m02) <readonly/>
I0804 00:33:34.982846 21140 main.go:141] libmachine: (ha-230158-m02) </disk>
I0804 00:33:34.982860 21140 main.go:141] libmachine: (ha-230158-m02) <disk type='file' device='disk'>
I0804 00:33:34.982872 21140 main.go:141] libmachine: (ha-230158-m02) <driver name='qemu' type='raw' cache='default' io='threads' />
I0804 00:33:34.982888 21140 main.go:141] libmachine: (ha-230158-m02) <source file='/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/ha-230158-m02.rawdisk'/>
I0804 00:33:34.982898 21140 main.go:141] libmachine: (ha-230158-m02) <target dev='hda' bus='virtio'/>
I0804 00:33:34.982907 21140 main.go:141] libmachine: (ha-230158-m02) </disk>
I0804 00:33:34.982922 21140 main.go:141] libmachine: (ha-230158-m02) <interface type='network'>
I0804 00:33:34.982941 21140 main.go:141] libmachine: (ha-230158-m02) <source network='mk-ha-230158'/>
I0804 00:33:34.982953 21140 main.go:141] libmachine: (ha-230158-m02) <model type='virtio'/>
I0804 00:33:34.982965 21140 main.go:141] libmachine: (ha-230158-m02) </interface>
I0804 00:33:34.982974 21140 main.go:141] libmachine: (ha-230158-m02) <interface type='network'>
I0804 00:33:34.982983 21140 main.go:141] libmachine: (ha-230158-m02) <source network='default'/>
I0804 00:33:34.982997 21140 main.go:141] libmachine: (ha-230158-m02) <model type='virtio'/>
I0804 00:33:34.983008 21140 main.go:141] libmachine: (ha-230158-m02) </interface>
I0804 00:33:34.983018 21140 main.go:141] libmachine: (ha-230158-m02) <serial type='pty'>
I0804 00:33:34.983027 21140 main.go:141] libmachine: (ha-230158-m02) <target port='0'/>
I0804 00:33:34.983038 21140 main.go:141] libmachine: (ha-230158-m02) </serial>
I0804 00:33:34.983046 21140 main.go:141] libmachine: (ha-230158-m02) <console type='pty'>
I0804 00:33:34.983057 21140 main.go:141] libmachine: (ha-230158-m02) <target type='serial' port='0'/>
I0804 00:33:34.983076 21140 main.go:141] libmachine: (ha-230158-m02) </console>
I0804 00:33:34.983091 21140 main.go:141] libmachine: (ha-230158-m02) <rng model='virtio'>
I0804 00:33:34.983105 21140 main.go:141] libmachine: (ha-230158-m02) <backend model='random'>/dev/random</backend>
I0804 00:33:34.983113 21140 main.go:141] libmachine: (ha-230158-m02) </rng>
I0804 00:33:34.983120 21140 main.go:141] libmachine: (ha-230158-m02)
I0804 00:33:34.983129 21140 main.go:141] libmachine: (ha-230158-m02)
I0804 00:33:34.983138 21140 main.go:141] libmachine: (ha-230158-m02) </devices>
I0804 00:33:34.983148 21140 main.go:141] libmachine: (ha-230158-m02) </domain>
I0804 00:33:34.983158 21140 main.go:141] libmachine: (ha-230158-m02)
I0804 00:33:34.989079 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:cb:b3:b0 in network default
I0804 00:33:34.989578 21140 main.go:141] libmachine: (ha-230158-m02) Ensuring networks are active...
I0804 00:33:34.989599 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:34.990268 21140 main.go:141] libmachine: (ha-230158-m02) Ensuring network default is active
I0804 00:33:34.990644 21140 main.go:141] libmachine: (ha-230158-m02) Ensuring network mk-ha-230158 is active
I0804 00:33:34.991147 21140 main.go:141] libmachine: (ha-230158-m02) Getting domain xml...
I0804 00:33:34.991882 21140 main.go:141] libmachine: (ha-230158-m02) Creating domain...
I0804 00:33:36.236143 21140 main.go:141] libmachine: (ha-230158-m02) Waiting to get IP...
I0804 00:33:36.236924 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:36.237320 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:36.237365 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:36.237316 21577 retry.go:31] will retry after 269.343087ms: waiting for machine to come up
I0804 00:33:36.508842 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:36.509404 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:36.509434 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:36.509353 21577 retry.go:31] will retry after 320.354ms: waiting for machine to come up
I0804 00:33:36.830933 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:36.831384 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:36.831405 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:36.831339 21577 retry.go:31] will retry after 388.826244ms: waiting for machine to come up
I0804 00:33:37.221810 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:37.222296 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:37.222324 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:37.222246 21577 retry.go:31] will retry after 438.566018ms: waiting for machine to come up
I0804 00:33:37.662559 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:37.662923 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:37.662950 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:37.662888 21577 retry.go:31] will retry after 720.487951ms: waiting for machine to come up
I0804 00:33:38.384849 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:38.385274 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:38.385296 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:38.385220 21577 retry.go:31] will retry after 780.198189ms: waiting for machine to come up
I0804 00:33:39.166800 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:39.167189 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:39.167217 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:39.167150 21577 retry.go:31] will retry after 1.085150437s: waiting for machine to come up
I0804 00:33:40.253366 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:40.253781 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:40.253804 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:40.253737 21577 retry.go:31] will retry after 1.077284779s: waiting for machine to come up
I0804 00:33:41.332446 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:41.332911 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:41.332940 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:41.332880 21577 retry.go:31] will retry after 1.445435502s: waiting for machine to come up
I0804 00:33:42.780433 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:42.780972 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:42.780996 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:42.780922 21577 retry.go:31] will retry after 2.049802174s: waiting for machine to come up
I0804 00:33:44.832833 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:44.833350 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:44.833376 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:44.833310 21577 retry.go:31] will retry after 2.47727833s: waiting for machine to come up
I0804 00:33:47.313965 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:47.314559 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:47.314586 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:47.314517 21577 retry.go:31] will retry after 2.252609164s: waiting for machine to come up
I0804 00:33:49.568155 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:49.568430 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:49.568451 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:49.568403 21577 retry.go:31] will retry after 3.504934561s: waiting for machine to come up
I0804 00:33:53.075350 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:53.075829 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find current IP address of domain ha-230158-m02 in network mk-ha-230158
I0804 00:33:53.075873 21140 main.go:141] libmachine: (ha-230158-m02) DBG | I0804 00:33:53.075769 21577 retry.go:31] will retry after 3.894784936s: waiting for machine to come up
I0804 00:33:56.974127 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:56.974614 21140 main.go:141] libmachine: (ha-230158-m02) Found IP for machine: 192.168.39.188
I0804 00:33:56.974638 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has current primary IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:56.974645 21140 main.go:141] libmachine: (ha-230158-m02) Reserving static IP address...
I0804 00:33:56.975086 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find host DHCP lease matching {name: "ha-230158-m02", mac: "52:54:00:18:6b:a7", ip: "192.168.39.188"} in network mk-ha-230158
I0804 00:33:57.044305 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Getting to WaitForSSH function...
I0804 00:33:57.044336 21140 main.go:141] libmachine: (ha-230158-m02) Reserved static IP address: 192.168.39.188
I0804 00:33:57.044373 21140 main.go:141] libmachine: (ha-230158-m02) Waiting for SSH to be available...
I0804 00:33:57.046724 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:33:57.047034 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158
I0804 00:33:57.047057 21140 main.go:141] libmachine: (ha-230158-m02) DBG | unable to find defined IP address of network mk-ha-230158 interface with MAC address 52:54:00:18:6b:a7
I0804 00:33:57.047178 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH client type: external
I0804 00:33:57.047200 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa (-rw-------)
I0804 00:33:57.047262 21140 main.go:141] libmachine: (ha-230158-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0804 00:33:57.047282 21140 main.go:141] libmachine: (ha-230158-m02) DBG | About to run SSH command:
I0804 00:33:57.047299 21140 main.go:141] libmachine: (ha-230158-m02) DBG | exit 0
I0804 00:33:57.050685 21140 main.go:141] libmachine: (ha-230158-m02) DBG | SSH cmd err, output: exit status 255:
I0804 00:33:57.050701 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
I0804 00:33:57.050709 21140 main.go:141] libmachine: (ha-230158-m02) DBG | command : exit 0
I0804 00:33:57.050718 21140 main.go:141] libmachine: (ha-230158-m02) DBG | err : exit status 255
I0804 00:33:57.050726 21140 main.go:141] libmachine: (ha-230158-m02) DBG | output :
I0804 00:34:00.050918 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Getting to WaitForSSH function...
I0804 00:34:00.053466 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.053948 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:00.053978 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.054097 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH client type: external
I0804 00:34:00.054121 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa (-rw-------)
I0804 00:34:00.054149 21140 main.go:141] libmachine: (ha-230158-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0804 00:34:00.054166 21140 main.go:141] libmachine: (ha-230158-m02) DBG | About to run SSH command:
I0804 00:34:00.054181 21140 main.go:141] libmachine: (ha-230158-m02) DBG | exit 0
I0804 00:34:00.182691 21140 main.go:141] libmachine: (ha-230158-m02) DBG | SSH cmd err, output: <nil>:
I0804 00:34:00.182940 21140 main.go:141] libmachine: (ha-230158-m02) KVM machine creation complete!
I0804 00:34:00.183237 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetConfigRaw
I0804 00:34:00.183772 21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:34:00.183934 21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:34:00.184119 21140 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0804 00:34:00.184135 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetState
I0804 00:34:00.185402 21140 main.go:141] libmachine: Detecting operating system of created instance...
I0804 00:34:00.185417 21140 main.go:141] libmachine: Waiting for SSH to be available...
I0804 00:34:00.185422 21140 main.go:141] libmachine: Getting to WaitForSSH function...
I0804 00:34:00.185427 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:34:00.187754 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.188163 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:00.188187 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.188355 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:34:00.188540 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:00.188694 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:00.188851 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:34:00.189011 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:34:00.189258 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:34:00.189270 21140 main.go:141] libmachine: About to run SSH command:
exit 0
I0804 00:34:00.297314 21140 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0804 00:34:00.297337 21140 main.go:141] libmachine: Detecting the provisioner...
I0804 00:34:00.297347 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:34:00.300140 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.300503 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:00.300548 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.300706 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:34:00.300893 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:00.301033 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:00.301147 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:34:00.301331 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:34:00.301509 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:34:00.301522 21140 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0804 00:34:00.410856 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0804 00:34:00.410918 21140 main.go:141] libmachine: found compatible host: buildroot
I0804 00:34:00.410928 21140 main.go:141] libmachine: Provisioning with buildroot...
I0804 00:34:00.410938 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:34:00.411200 21140 buildroot.go:166] provisioning hostname "ha-230158-m02"
I0804 00:34:00.411222 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:34:00.411396 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:34:00.413932 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.414334 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:00.414361 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.414483 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:34:00.414639 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:00.414750 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:00.414866 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:34:00.415013 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:34:00.415182 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:34:00.415200 21140 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-230158-m02 && echo "ha-230158-m02" | sudo tee /etc/hostname
I0804 00:34:00.540909 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-230158-m02
I0804 00:34:00.540938 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:34:00.543874 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.544239 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:00.544284 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.544450 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:34:00.544648 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:00.544834 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:00.544976 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:34:00.545131 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:34:00.545314 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:34:00.545335 21140 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-230158-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-230158-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-230158-m02' | sudo tee -a /etc/hosts;
fi
fi
I0804 00:34:00.667251 21140 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0804 00:34:00.667277 21140 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-3947/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-3947/.minikube}
I0804 00:34:00.667302 21140 buildroot.go:174] setting up certificates
I0804 00:34:00.667311 21140 provision.go:84] configureAuth start
I0804 00:34:00.667320 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetMachineName
I0804 00:34:00.667577 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:34:00.669910 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.670300 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:00.670323 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.670468 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:34:00.672709 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.673007 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:00.673036 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.673138 21140 provision.go:143] copyHostCerts
I0804 00:34:00.673166 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
I0804 00:34:00.673200 21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem, removing ...
I0804 00:34:00.673211 21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
I0804 00:34:00.673280 21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem (1082 bytes)
I0804 00:34:00.673350 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
I0804 00:34:00.673368 21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem, removing ...
I0804 00:34:00.673372 21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
I0804 00:34:00.673397 21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem (1123 bytes)
I0804 00:34:00.673438 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
I0804 00:34:00.673454 21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem, removing ...
I0804 00:34:00.673458 21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
I0804 00:34:00.673478 21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem (1679 bytes)
I0804 00:34:00.673525 21140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem org=jenkins.ha-230158-m02 san=[127.0.0.1 192.168.39.188 ha-230158-m02 localhost minikube]
I0804 00:34:00.778280 21140 provision.go:177] copyRemoteCerts
I0804 00:34:00.778327 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0804 00:34:00.778346 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:34:00.780655 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.780960 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:00.780989 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.781148 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:34:00.781336 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:00.781476 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:34:00.781598 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:34:00.868546 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem -> /etc/docker/server.pem
I0804 00:34:00.868625 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0804 00:34:00.892433 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0804 00:34:00.892507 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0804 00:34:00.915531 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0804 00:34:00.915587 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0804 00:34:00.939019 21140 provision.go:87] duration metric: took 271.698597ms to configureAuth
I0804 00:34:00.939042 21140 buildroot.go:189] setting minikube options for container-runtime
I0804 00:34:00.939230 21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:34:00.939254 21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:34:00.939559 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:34:00.941901 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.942307 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:00.942327 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:00.942459 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:34:00.942649 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:00.942819 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:00.942985 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:34:00.943135 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:34:00.943305 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:34:00.943318 21140 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0804 00:34:01.055739 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0804 00:34:01.055766 21140 buildroot.go:70] root file system type: tmpfs
I0804 00:34:01.055918 21140 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0804 00:34:01.055942 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:34:01.058621 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:01.058973 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:01.059001 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:01.059203 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:34:01.059366 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:01.059560 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:01.059712 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:34:01.059898 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:34:01.060107 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:34:01.060200 21140 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.132"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0804 00:34:01.187920 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.132
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0804 00:34:01.187950 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:34:01.190605 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:01.190996 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:01.191028 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:01.191200 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:34:01.191425 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:01.191586 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:01.191762 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:34:01.191931 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:34:01.192109 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:34:01.192133 21140 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0804 00:34:02.973852 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0804 00:34:02.973882 21140 main.go:141] libmachine: Checking connection to Docker...
I0804 00:34:02.973895 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetURL
I0804 00:34:02.975180 21140 main.go:141] libmachine: (ha-230158-m02) DBG | Using libvirt version 6000000
I0804 00:34:02.977545 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:02.977879 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:02.977907 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:02.978011 21140 main.go:141] libmachine: Docker is up and running!
I0804 00:34:02.978027 21140 main.go:141] libmachine: Reticulating splines...
I0804 00:34:02.978035 21140 client.go:171] duration metric: took 28.574452434s to LocalClient.Create
I0804 00:34:02.978058 21140 start.go:167] duration metric: took 28.574514618s to libmachine.API.Create "ha-230158"
I0804 00:34:02.978070 21140 start.go:293] postStartSetup for "ha-230158-m02" (driver="kvm2")
I0804 00:34:02.978078 21140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0804 00:34:02.978101 21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:34:02.978341 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0804 00:34:02.978382 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:34:02.980444 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:02.980724 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:02.980741 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:02.980855 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:34:02.981022 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:02.981194 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:34:02.981362 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:34:03.065731 21140 ssh_runner.go:195] Run: cat /etc/os-release
I0804 00:34:03.070115 21140 info.go:137] Remote host: Buildroot 2023.02.9
I0804 00:34:03.070134 21140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/addons for local assets ...
I0804 00:34:03.070199 21140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/files for local assets ...
I0804 00:34:03.070312 21140 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> 111362.pem in /etc/ssl/certs
I0804 00:34:03.070326 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /etc/ssl/certs/111362.pem
I0804 00:34:03.070430 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0804 00:34:03.080885 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /etc/ssl/certs/111362.pem (1708 bytes)
I0804 00:34:03.103941 21140 start.go:296] duration metric: took 125.859795ms for postStartSetup
I0804 00:34:03.104002 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetConfigRaw
I0804 00:34:03.104596 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:34:03.107330 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:03.107729 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:03.107756 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:03.107958 21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:34:03.108164 21140 start.go:128] duration metric: took 28.721688077s to createHost
I0804 00:34:03.108189 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:34:03.110106 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:03.110474 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:03.110499 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:03.110753 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:34:03.110929 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:03.111096 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:03.111208 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:34:03.111337 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:34:03.111506 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.188 22 <nil> <nil>}
I0804 00:34:03.111516 21140 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0804 00:34:03.222970 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731643.203343116
I0804 00:34:03.222994 21140 fix.go:216] guest clock: 1722731643.203343116
I0804 00:34:03.223005 21140 fix.go:229] Guest: 2024-08-04 00:34:03.203343116 +0000 UTC Remote: 2024-08-04 00:34:03.108175533 +0000 UTC m=+92.285257944 (delta=95.167583ms)
I0804 00:34:03.223029 21140 fix.go:200] guest clock delta is within tolerance: 95.167583ms
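The guest-clock check above compares a `date +%s.%N` timestamp taken inside the VM against the host's clock and accepts the result if the drift is small (here 95.167583ms). A rough shell sketch of that comparison follows; the 2-second tolerance is an assumed value for illustration, not minikube's actual constant.

```shell
# Hypothetical sketch of the guest-clock drift check; in minikube the "guest"
# timestamp would come from `date +%s.%N` run over SSH inside the VM.
guest=$(date +%s.%N)
host=$(date +%s.%N)
# Absolute delta between the two clocks, in seconds.
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d; printf "%.9f", d }')
# Assumed 2s tolerance for illustration; a larger drift would warrant a sync.
awk -v d="$delta" 'BEGIN { exit !(d < 2) }' && echo "guest clock delta is within tolerance: ${delta}s"
```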
I0804 00:34:03.223037 21140 start.go:83] releasing machines lock for "ha-230158-m02", held for 28.836660328s
I0804 00:34:03.223063 21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:34:03.223345 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:34:03.225993 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:03.226329 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:03.226351 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:03.228712 21140 out.go:177] * Found network options:
I0804 00:34:03.230042 21140 out.go:177] - NO_PROXY=192.168.39.132
W0804 00:34:03.231182 21140 proxy.go:119] fail to check proxy env: Error ip not in block
I0804 00:34:03.231221 21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:34:03.231663 21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:34:03.231834 21140 main.go:141] libmachine: (ha-230158-m02) Calling .DriverName
I0804 00:34:03.231905 21140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0804 00:34:03.231944 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
W0804 00:34:03.232042 21140 proxy.go:119] fail to check proxy env: Error ip not in block
I0804 00:34:03.232120 21140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0804 00:34:03.232140 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHHostname
I0804 00:34:03.234608 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:03.234862 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:03.234991 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:03.235018 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:03.235123 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:34:03.235302 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:03.235311 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:03.235330 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:03.235524 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:34:03.235544 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHPort
I0804 00:34:03.235694 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHKeyPath
I0804 00:34:03.235692 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
I0804 00:34:03.235836 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetSSHUsername
I0804 00:34:03.235962 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m02/id_rsa Username:docker}
W0804 00:34:03.316348 21140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0804 00:34:03.316420   21140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0804 00:34:03.338803 21140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
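The `find ... -exec mv {} {}.mk_disabled` step above renames bridge/podman CNI configs out of the way so they stop loading. It can be reproduced safely in a scratch directory (file names here are hypothetical examples):

```shell
# Hypothetical re-creation of the bridge/podman CNI disabling step in a temp dir.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-flannel.conflist" "$d/200-loopback.conf"
# Rename any bridge/podman config that is not already disabled.
find "$d" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```

Only the podman-bridge file matches the name filters, so the flannel and loopback files are left untouched, matching the single `disabled [...]` entry in the log.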
I0804 00:34:03.338828 21140 start.go:495] detecting cgroup driver to use...
I0804 00:34:03.338936   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0804 00:34:03.358068 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0804 00:34:03.368777 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0804 00:34:03.379245 21140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0804 00:34:03.379303 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0804 00:34:03.389867 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0804 00:34:03.400270 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0804 00:34:03.411270 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0804 00:34:03.421718 21140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0804 00:34:03.432972 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0804 00:34:03.443789 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0804 00:34:03.454923 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0804 00:34:03.465482 21140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0804 00:34:03.475350 21140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0804 00:34:03.485285 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:34:03.590505 21140 ssh_runner.go:195] Run: sudo systemctl restart containerd
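The sed sequence above rewrites `/etc/containerd/config.toml` so containerd uses the cgroupfs driver. The core edit, flipping `SystemdCgroup` while preserving indentation via the captured group, can be tried on a throwaway file:

```shell
# Hypothetical miniature of the containerd config rewrite: flip SystemdCgroup
# to false while keeping the original indentation, as the sed above does.
cfg=$(mktemp)
printf '    SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```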
I0804 00:34:03.615665 21140 start.go:495] detecting cgroup driver to use...
I0804 00:34:03.615750 21140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0804 00:34:03.631563 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0804 00:34:03.647428 21140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0804 00:34:03.663904 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0804 00:34:03.677259 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0804 00:34:03.689907 21140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0804 00:34:03.721179 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0804 00:34:03.735231   21140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0804 00:34:03.753269 21140 ssh_runner.go:195] Run: which cri-dockerd
I0804 00:34:03.757177 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0804 00:34:03.767005 21140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0804 00:34:03.783229 21140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0804 00:34:03.901393 21140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0804 00:34:04.027419 21140 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0804 00:34:04.027457 21140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
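The log only reports that a 130-byte `/etc/docker/daemon.json` was written to select the cgroupfs driver; the file contents are not shown. An assumed approximation, using the real Docker daemon option `native.cgroupdriver`, would look like:

```shell
# Assumed approximation of the daemon.json minikube writes here; the exact
# contents are not shown in the log, only its size (130 bytes) and purpose.
cat > daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file"
}
EOF
cat daemon.json
```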
I0804 00:34:04.044350 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:34:04.154078 21140 ssh_runner.go:195] Run: sudo systemctl restart docker
I0804 00:34:06.510775 21140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.35665692s)
I0804 00:34:06.510853 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0804 00:34:06.524398 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0804 00:34:06.536855 21140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0804 00:34:06.642364 21140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0804 00:34:06.763061 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:34:06.881512 21140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0804 00:34:06.899056 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0804 00:34:06.912162 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:34:07.033135 21140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0804 00:34:07.111821 21140 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0804 00:34:07.111882 21140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0804 00:34:07.117377 21140 start.go:563] Will wait 60s for crictl version
I0804 00:34:07.117436 21140 ssh_runner.go:195] Run: which crictl
I0804 00:34:07.122834 21140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0804 00:34:07.159702 21140 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.1.1
RuntimeApiVersion: v1
I0804 00:34:07.159774 21140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0804 00:34:07.184991 21140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0804 00:34:07.211671 21140 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
I0804 00:34:07.212904 21140 out.go:177] - env NO_PROXY=192.168.39.132
I0804 00:34:07.214403 21140 main.go:141] libmachine: (ha-230158-m02) Calling .GetIP
I0804 00:34:07.217472 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:07.217944 21140 main.go:141] libmachine: (ha-230158-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6b:a7", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:33:49 +0000 UTC Type:0 Mac:52:54:00:18:6b:a7 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-230158-m02 Clientid:01:52:54:00:18:6b:a7}
I0804 00:34:07.217971 21140 main.go:141] libmachine: (ha-230158-m02) DBG | domain ha-230158-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:18:6b:a7 in network mk-ha-230158
I0804 00:34:07.218194 21140 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0804 00:34:07.222220 21140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
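The `/etc/hosts` update above uses a grep-then-append rewrite: strip any existing tab-separated `host.minikube.internal` entry, append the canonical one, and copy the temp file back. A sandboxed sketch (the file path is a stand-in for `/etc/hosts`):

```shell
# Hypothetical sandboxed version of the host.minikube.internal hosts update.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.39.1\thost.minikube.internal\n' > "$hosts"
# Drop the old tab-separated entry, then append the canonical line.
{ grep -v $'\thost.minikube.internal$' "$hosts"; echo "192.168.39.1 host.minikube.internal"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

The rewrite is idempotent: rerunning it leaves exactly one `host.minikube.internal` entry.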
I0804 00:34:07.235640 21140 mustload.go:65] Loading cluster: ha-230158
I0804 00:34:07.235853 21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:34:07.236199 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:34:07.236242 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:34:07.250786 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
I0804 00:34:07.251342 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:34:07.251781 21140 main.go:141] libmachine: Using API Version 1
I0804 00:34:07.251801 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:34:07.252086 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:34:07.252243 21140 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:34:07.253628 21140 host.go:66] Checking if "ha-230158" exists ...
I0804 00:34:07.253914 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:34:07.253948 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:34:07.267875 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
I0804 00:34:07.268286 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:34:07.268718 21140 main.go:141] libmachine: Using API Version 1
I0804 00:34:07.268736 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:34:07.269035 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:34:07.269319 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:34:07.269532 21140 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158 for IP: 192.168.39.188
I0804 00:34:07.269544 21140 certs.go:194] generating shared ca certs ...
I0804 00:34:07.269559 21140 certs.go:226] acquiring lock for ca certs: {Name:mkffa482a260ec35b4e7e61a9f84c11349615c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:34:07.269670 21140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key
I0804 00:34:07.269708 21140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key
I0804 00:34:07.269717 21140 certs.go:256] generating profile certs ...
I0804 00:34:07.269774 21140 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key
I0804 00:34:07.269798 21140 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.614132ed
I0804 00:34:07.269812 21140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.614132ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.188 192.168.39.254]
I0804 00:34:07.479685 21140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.614132ed ...
I0804 00:34:07.479713 21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.614132ed: {Name:mk4942c0828754fe87b4343b4543d452f5279ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:34:07.479872 21140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.614132ed ...
I0804 00:34:07.479885 21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.614132ed: {Name:mk7d37b9013df8b64903584b8f3e87686cf52657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:34:07.479961 21140 certs.go:381] copying /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.614132ed -> /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt
I0804 00:34:07.480095 21140 certs.go:385] copying /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.614132ed -> /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key
I0804 00:34:07.480217 21140 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key
I0804 00:34:07.480230 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0804 00:34:07.480248 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0804 00:34:07.480261 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0804 00:34:07.480274 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0804 00:34:07.480286 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0804 00:34:07.480298 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0804 00:34:07.480310 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0804 00:34:07.480322 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0804 00:34:07.480364 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem (1338 bytes)
W0804 00:34:07.480392 21140 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136_empty.pem, impossibly tiny 0 bytes
I0804 00:34:07.480402 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem (1679 bytes)
I0804 00:34:07.480422 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem (1082 bytes)
I0804 00:34:07.480441 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem (1123 bytes)
I0804 00:34:07.480462 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem (1679 bytes)
I0804 00:34:07.480497 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem (1708 bytes)
I0804 00:34:07.480523 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /usr/share/ca-certificates/111362.pem
I0804 00:34:07.480537 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0804 00:34:07.480549 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem -> /usr/share/ca-certificates/11136.pem
I0804 00:34:07.480578 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:34:07.483540 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:34:07.483941 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:34:07.483967 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:34:07.484158 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:34:07.484383 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:34:07.484570 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:34:07.484734 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:34:07.558603   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I0804 00:34:07.563671 21140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0804 00:34:07.575237   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I0804 00:34:07.579302 21140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
I0804 00:34:07.590134   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I0804 00:34:07.594449 21140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0804 00:34:07.606285   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I0804 00:34:07.610012 21140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
I0804 00:34:07.620077   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I0804 00:34:07.624507 21140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0804 00:34:07.634464   21140 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I0804 00:34:07.638732 21140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
I0804 00:34:07.651163 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0804 00:34:07.675443 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0804 00:34:07.697641 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0804 00:34:07.720620 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0804 00:34:07.743358 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I0804 00:34:07.766199 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0804 00:34:07.789338 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0804 00:34:07.812594 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0804 00:34:07.835867 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /usr/share/ca-certificates/111362.pem (1708 bytes)
I0804 00:34:07.858903 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0804 00:34:07.881640 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem --> /usr/share/ca-certificates/11136.pem (1338 bytes)
I0804 00:34:07.904313 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0804 00:34:07.921094 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
I0804 00:34:07.937606 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0804 00:34:07.953663 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
I0804 00:34:07.970041 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0804 00:34:07.986209 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
I0804 00:34:08.002865 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0804 00:34:08.021694 21140 ssh_runner.go:195] Run: openssl version
I0804 00:34:08.027587 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0804 00:34:08.038709 21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0804 00:34:08.043324 21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 4 00:21 /usr/share/ca-certificates/minikubeCA.pem
I0804 00:34:08.043385 21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0804 00:34:08.049038 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0804 00:34:08.059481 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11136.pem && ln -fs /usr/share/ca-certificates/11136.pem /etc/ssl/certs/11136.pem"
I0804 00:34:08.070091 21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11136.pem
I0804 00:34:08.074494 21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 4 00:28 /usr/share/ca-certificates/11136.pem
I0804 00:34:08.074533 21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11136.pem
I0804 00:34:08.079883 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11136.pem /etc/ssl/certs/51391683.0"
I0804 00:34:08.090363 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111362.pem && ln -fs /usr/share/ca-certificates/111362.pem /etc/ssl/certs/111362.pem"
I0804 00:34:08.100615 21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111362.pem
I0804 00:34:08.104909 21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 4 00:28 /usr/share/ca-certificates/111362.pem
I0804 00:34:08.104945 21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111362.pem
I0804 00:34:08.110433 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111362.pem /etc/ssl/certs/3ec20f2e.0"
I0804 00:34:08.120574 21140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0804 00:34:08.124582 21140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0804 00:34:08.124647 21140 kubeadm.go:934] updating node {m02 192.168.39.188 8443 v1.30.3 docker true true} ...
I0804 00:34:08.124753 21140 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-230158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.188
[Install]
config:
{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0804 00:34:08.124780 21140 kube-vip.go:115] generating kube-vip config ...
I0804 00:34:08.124820 21140 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0804 00:34:08.139786 21140 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0804 00:34:08.139846 21140 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.39.254
- name: prometheus_server
value: :2112
- name: lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.8.0
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
I0804 00:34:08.139898 21140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
I0804 00:34:08.149586 21140 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
Initiating transfer...
I0804 00:34:08.149628 21140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
I0804 00:34:08.158592 21140 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
I0804 00:34:08.158616 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
I0804 00:34:08.158661 21140 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubelet
I0804 00:34:08.158676 21140 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubeadm
I0804 00:34:08.158681 21140 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
I0804 00:34:08.163476 21140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
I0804 00:34:08.163498 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
I0804 00:34:10.501737 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
I0804 00:34:10.501822 21140 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
I0804 00:34:10.506769 21140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
I0804 00:34:10.506799 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
I0804 00:34:12.405648 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:34:12.420742 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
I0804 00:34:12.420837 21140 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
I0804 00:34:12.425182 21140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
I0804 00:34:12.425209 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
I0804 00:34:12.830358 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I0804 00:34:12.839608 21140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
I0804 00:34:12.856242 21140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0804 00:34:12.872578 21140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
I0804 00:34:12.888266 21140 ssh_runner.go:195] Run: grep 192.168.39.254 control-plane.minikube.internal$ /etc/hosts
I0804 00:34:12.891912 21140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0804 00:34:12.903392 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:34:13.018858 21140 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0804 00:34:13.039089 21140 host.go:66] Checking if "ha-230158" exists ...
I0804 00:34:13.039397 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:34:13.039432 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:34:13.053963 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40565
I0804 00:34:13.054374 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:34:13.054851 21140 main.go:141] libmachine: Using API Version 1
I0804 00:34:13.054873 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:34:13.055241 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:34:13.055427 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:34:13.055575 21140 start.go:317] joinCluster: &{Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExp
iration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0804 00:34:13.055718 21140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
I0804 00:34:13.055739 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:34:13.058643 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:34:13.059080 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:34:13.059107 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:34:13.059316 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:34:13.059529 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:34:13.059699 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:34:13.060688 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:34:13.242684 21140 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0804 00:34:13.242729 21140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9f5oe8.e28x00q0ngfisul1 --discovery-token-ca-cert-hash sha256:df45234da77be7664dbc18ef5748e1cb4d47aa47bd0026b7b7a8eef37767d0f0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-230158-m02 --control-plane --apiserver-advertise-address=192.168.39.188 --apiserver-bind-port=8443"
I0804 00:34:35.004730 21140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9f5oe8.e28x00q0ngfisul1 --discovery-token-ca-cert-hash sha256:df45234da77be7664dbc18ef5748e1cb4d47aa47bd0026b7b7a8eef37767d0f0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-230158-m02 --control-plane --apiserver-advertise-address=192.168.39.188 --apiserver-bind-port=8443": (21.761977329s)
I0804 00:34:35.004764 21140 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0804 00:34:35.554008 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-230158-m02 minikube.k8s.io/updated_at=2024_08_04T00_34_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=ha-230158 minikube.k8s.io/primary=false
I0804 00:34:35.691757 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-230158-m02 node-role.kubernetes.io/control-plane:NoSchedule-
I0804 00:34:35.818745 21140 start.go:319] duration metric: took 22.763168526s to joinCluster
I0804 00:34:35.818822 21140 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0804 00:34:35.819150 21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:34:35.820459 21140 out.go:177] * Verifying Kubernetes components...
I0804 00:34:35.821769 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:34:36.156667 21140 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0804 00:34:36.198514 21140 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/19364-3947/kubeconfig
I0804 00:34:36.198830 21140 kapi.go:59] client config for ha-230158: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key", CAFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W0804 00:34:36.198909 21140 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.132:8443
I0804 00:34:36.199154 21140 node_ready.go:35] waiting up to 6m0s for node "ha-230158-m02" to be "Ready" ...
I0804 00:34:36.199257 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:36.199267 21140 round_trippers.go:469] Request Headers:
I0804 00:34:36.199277 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:36.199282 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:36.226564 21140 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
I0804 00:34:36.700098 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:36.700120 21140 round_trippers.go:469] Request Headers:
I0804 00:34:36.700131 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:36.700138 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:36.711324 21140 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0804 00:34:37.200254 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:37.200278 21140 round_trippers.go:469] Request Headers:
I0804 00:34:37.200290 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:37.200298 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:37.204598 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:34:37.699445 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:37.699467 21140 round_trippers.go:469] Request Headers:
I0804 00:34:37.699482 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:37.699488 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:37.703558 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:34:38.199900 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:38.199917 21140 round_trippers.go:469] Request Headers:
I0804 00:34:38.199926 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:38.199930 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:38.203357 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:38.204075 21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
I0804 00:34:38.699358 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:38.699381 21140 round_trippers.go:469] Request Headers:
I0804 00:34:38.699388 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:38.699392 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:38.702677 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:39.199615 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:39.199641 21140 round_trippers.go:469] Request Headers:
I0804 00:34:39.199649 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:39.199653 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:39.206263 21140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0804 00:34:39.700084 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:39.700108 21140 round_trippers.go:469] Request Headers:
I0804 00:34:39.700116 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:39.700121 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:39.704339 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:34:40.199338 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:40.199364 21140 round_trippers.go:469] Request Headers:
I0804 00:34:40.199375 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:40.199383 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:40.202609 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:40.699568 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:40.699589 21140 round_trippers.go:469] Request Headers:
I0804 00:34:40.699597 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:40.699600 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:40.702632 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:40.703209 21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
I0804 00:34:41.199424 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:41.199454 21140 round_trippers.go:469] Request Headers:
I0804 00:34:41.199463 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:41.199467 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:41.202612 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:41.699623 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:41.699643 21140 round_trippers.go:469] Request Headers:
I0804 00:34:41.699651 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:41.699656 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:41.702876 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:42.200058 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:42.200077 21140 round_trippers.go:469] Request Headers:
I0804 00:34:42.200085 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:42.200088 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:42.204367 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:34:42.699492 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:42.699525 21140 round_trippers.go:469] Request Headers:
I0804 00:34:42.699536 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:42.699540 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:42.702416 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:43.200086 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:43.200111 21140 round_trippers.go:469] Request Headers:
I0804 00:34:43.200123 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:43.200129 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:43.204006 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:43.204729 21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
I0804 00:34:43.700249 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:43.700271 21140 round_trippers.go:469] Request Headers:
I0804 00:34:43.700278 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:43.700281 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:43.703414 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:44.199360 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:44.199384 21140 round_trippers.go:469] Request Headers:
I0804 00:34:44.199394 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:44.199399 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:44.202381 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:44.699980 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:44.699999 21140 round_trippers.go:469] Request Headers:
I0804 00:34:44.700007 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:44.700011 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:44.702991 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:45.200017 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:45.200039 21140 round_trippers.go:469] Request Headers:
I0804 00:34:45.200046 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:45.200051 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:45.203656 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:45.700015 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:45.700037 21140 round_trippers.go:469] Request Headers:
I0804 00:34:45.700047 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:45.700052 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:45.702590 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:45.703419 21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
I0804 00:34:46.199304 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:46.199326 21140 round_trippers.go:469] Request Headers:
I0804 00:34:46.199335 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:46.199340 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:46.202195 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:46.699359 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:46.699385 21140 round_trippers.go:469] Request Headers:
I0804 00:34:46.699397 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:46.699403 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:46.702123 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:47.200099 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:47.200121 21140 round_trippers.go:469] Request Headers:
I0804 00:34:47.200127 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:47.200131 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:47.203724 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:47.699403 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:47.699425 21140 round_trippers.go:469] Request Headers:
I0804 00:34:47.699435 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:47.699439 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:47.703523 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:34:47.704523 21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
I0804 00:34:48.199970 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:48.199991 21140 round_trippers.go:469] Request Headers:
I0804 00:34:48.199998 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:48.200001 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:48.204184 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:34:48.699350 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:48.699371 21140 round_trippers.go:469] Request Headers:
I0804 00:34:48.699379 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:48.699383 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:48.702332 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:49.199373 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:49.199392 21140 round_trippers.go:469] Request Headers:
I0804 00:34:49.199399 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:49.199404 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:49.202714 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:49.700177 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:49.700199 21140 round_trippers.go:469] Request Headers:
I0804 00:34:49.700207 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:49.700212 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:49.703239 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:50.200362 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:50.200388 21140 round_trippers.go:469] Request Headers:
I0804 00:34:50.200399 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:50.200407 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:50.204353 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:50.205240 21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
I0804 00:34:50.699415 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:50.699443 21140 round_trippers.go:469] Request Headers:
I0804 00:34:50.699451 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:50.699456 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:50.702632 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:51.199336 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:51.199356 21140 round_trippers.go:469] Request Headers:
I0804 00:34:51.199365 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:51.199372 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:51.202414 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:51.699440 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:51.699463 21140 round_trippers.go:469] Request Headers:
I0804 00:34:51.699470 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:51.699474 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:51.702837 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:52.199493 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:52.199515 21140 round_trippers.go:469] Request Headers:
I0804 00:34:52.199522 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:52.199527 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:52.202962 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:52.700340 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:52.700361 21140 round_trippers.go:469] Request Headers:
I0804 00:34:52.700370 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:52.700374 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:52.704175 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:52.705307 21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
I0804 00:34:53.200250 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:53.200273 21140 round_trippers.go:469] Request Headers:
I0804 00:34:53.200282 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:53.200286 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:53.203612 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:53.699935 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:53.699956 21140 round_trippers.go:469] Request Headers:
I0804 00:34:53.699963 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:53.699966 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:53.703682 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:54.199619 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:54.199644 21140 round_trippers.go:469] Request Headers:
I0804 00:34:54.199656 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:54.199662 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:54.202953 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:54.699443 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:54.699466 21140 round_trippers.go:469] Request Headers:
I0804 00:34:54.699474 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:54.699477 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:54.702915 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:55.199839 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:55.199860 21140 round_trippers.go:469] Request Headers:
I0804 00:34:55.199868 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:55.199873 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:55.203990 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:34:55.204671 21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
I0804 00:34:55.700081 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:55.700106 21140 round_trippers.go:469] Request Headers:
I0804 00:34:55.700118 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:55.700123 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:55.703768 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:56.200350 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:56.200374 21140 round_trippers.go:469] Request Headers:
I0804 00:34:56.200386 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:56.200391 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:56.204003 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:56.700084 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:56.700107 21140 round_trippers.go:469] Request Headers:
I0804 00:34:56.700115 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:56.700119 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:56.703414 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:57.199662 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:57.199686 21140 round_trippers.go:469] Request Headers:
I0804 00:34:57.199697 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:57.199702 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:57.207529 21140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0804 00:34:57.208233 21140 node_ready.go:53] node "ha-230158-m02" has status "Ready":"False"
I0804 00:34:57.699361 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:57.699387 21140 round_trippers.go:469] Request Headers:
I0804 00:34:57.699396 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:57.699401 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:57.702114 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:57.702628 21140 node_ready.go:49] node "ha-230158-m02" has status "Ready":"True"
I0804 00:34:57.702649 21140 node_ready.go:38] duration metric: took 21.503473952s for node "ha-230158-m02" to be "Ready" ...
I0804 00:34:57.702657 21140 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0804 00:34:57.702710 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
I0804 00:34:57.702718 21140 round_trippers.go:469] Request Headers:
I0804 00:34:57.702725 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:57.702731 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:57.707156 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:34:57.713455 21140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cqbjc" in "kube-system" namespace to be "Ready" ...
I0804 00:34:57.713525 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cqbjc
I0804 00:34:57.713531 21140 round_trippers.go:469] Request Headers:
I0804 00:34:57.713538 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:57.713543 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:57.716204 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:57.716817 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:34:57.716832 21140 round_trippers.go:469] Request Headers:
I0804 00:34:57.716839 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:57.716843 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:57.719114 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:57.719698 21140 pod_ready.go:92] pod "coredns-7db6d8ff4d-cqbjc" in "kube-system" namespace has status "Ready":"True"
I0804 00:34:57.719715 21140 pod_ready.go:81] duration metric: took 6.238758ms for pod "coredns-7db6d8ff4d-cqbjc" in "kube-system" namespace to be "Ready" ...
I0804 00:34:57.719726 21140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xt2gb" in "kube-system" namespace to be "Ready" ...
I0804 00:34:57.719778 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xt2gb
I0804 00:34:57.719785 21140 round_trippers.go:469] Request Headers:
I0804 00:34:57.719794 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:57.719800 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:57.721849 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:57.722448 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:34:57.722461 21140 round_trippers.go:469] Request Headers:
I0804 00:34:57.722467 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:57.722470 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:57.724829 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:57.725653 21140 pod_ready.go:92] pod "coredns-7db6d8ff4d-xt2gb" in "kube-system" namespace has status "Ready":"True"
I0804 00:34:57.725669 21140 pod_ready.go:81] duration metric: took 5.935947ms for pod "coredns-7db6d8ff4d-xt2gb" in "kube-system" namespace to be "Ready" ...
I0804 00:34:57.725677 21140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:34:57.725714 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-230158
I0804 00:34:57.725722 21140 round_trippers.go:469] Request Headers:
I0804 00:34:57.725728 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:57.725734 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:57.727968 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:57.728620 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:34:57.728638 21140 round_trippers.go:469] Request Headers:
I0804 00:34:57.728647 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:57.728651 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:57.730852 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:57.731781 21140 pod_ready.go:92] pod "etcd-ha-230158" in "kube-system" namespace has status "Ready":"True"
I0804 00:34:57.731796 21140 pod_ready.go:81] duration metric: took 6.114243ms for pod "etcd-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:34:57.731803 21140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:34:57.731848 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-230158-m02
I0804 00:34:57.731857 21140 round_trippers.go:469] Request Headers:
I0804 00:34:57.731867 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:57.731876 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:57.734086 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:57.734660 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:57.734675 21140 round_trippers.go:469] Request Headers:
I0804 00:34:57.734684 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:57.734695 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:57.736819 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:57.737268 21140 pod_ready.go:92] pod "etcd-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
I0804 00:34:57.737286 21140 pod_ready.go:81] duration metric: took 5.477087ms for pod "etcd-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:34:57.737303 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:34:57.899675 21140 request.go:629] Waited for 162.319339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158
I0804 00:34:57.899773 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158
I0804 00:34:57.899784 21140 round_trippers.go:469] Request Headers:
I0804 00:34:57.899796 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:57.899803 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:57.903095 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:58.099982 21140 request.go:629] Waited for 196.23398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:34:58.100033 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:34:58.100038 21140 round_trippers.go:469] Request Headers:
I0804 00:34:58.100050 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:58.100055 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:58.103327 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:58.103770 21140 pod_ready.go:92] pod "kube-apiserver-ha-230158" in "kube-system" namespace has status "Ready":"True"
I0804 00:34:58.103785 21140 pod_ready.go:81] duration metric: took 366.474938ms for pod "kube-apiserver-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:34:58.103794 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:34:58.299984 21140 request.go:629] Waited for 196.13474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158-m02
I0804 00:34:58.300050 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158-m02
I0804 00:34:58.300055 21140 round_trippers.go:469] Request Headers:
I0804 00:34:58.300063 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:58.300066 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:58.303008 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:58.500054 21140 request.go:629] Waited for 196.35867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:58.500106 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:58.500110 21140 round_trippers.go:469] Request Headers:
I0804 00:34:58.500117 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:58.500122 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:58.503069 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:58.503523 21140 pod_ready.go:92] pod "kube-apiserver-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
I0804 00:34:58.503541 21140 pod_ready.go:81] duration metric: took 399.740623ms for pod "kube-apiserver-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:34:58.503550 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:34:58.699633 21140 request.go:629] Waited for 195.997904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158
I0804 00:34:58.699685 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158
I0804 00:34:58.699690 21140 round_trippers.go:469] Request Headers:
I0804 00:34:58.699697 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:58.699702 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:58.703099 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:58.900061 21140 request.go:629] Waited for 196.396916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:34:58.900138 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:34:58.900150 21140 round_trippers.go:469] Request Headers:
I0804 00:34:58.900162 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:58.900174 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:58.903341 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:58.903879 21140 pod_ready.go:92] pod "kube-controller-manager-ha-230158" in "kube-system" namespace has status "Ready":"True"
I0804 00:34:58.903904 21140 pod_ready.go:81] duration metric: took 400.346324ms for pod "kube-controller-manager-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:34:58.903917 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:34:59.099943 21140 request.go:629] Waited for 195.954598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158-m02
I0804 00:34:59.100017 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158-m02
I0804 00:34:59.100024 21140 round_trippers.go:469] Request Headers:
I0804 00:34:59.100031 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:59.100035 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:59.103509 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:59.299446 21140 request.go:629] Waited for 195.230977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:59.299526 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:59.299541 21140 round_trippers.go:469] Request Headers:
I0804 00:34:59.299553 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:59.299557 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:59.302558 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:34:59.303339 21140 pod_ready.go:92] pod "kube-controller-manager-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
I0804 00:34:59.303356 21140 pod_ready.go:81] duration metric: took 399.432866ms for pod "kube-controller-manager-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:34:59.303364 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8tgp2" in "kube-system" namespace to be "Ready" ...
I0804 00:34:59.500018 21140 request.go:629] Waited for 196.594484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tgp2
I0804 00:34:59.500098 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tgp2
I0804 00:34:59.500113 21140 round_trippers.go:469] Request Headers:
I0804 00:34:59.500128 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:59.500140 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:59.503548 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:59.699517 21140 request.go:629] Waited for 195.278381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:59.699567 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:34:59.699572 21140 round_trippers.go:469] Request Headers:
I0804 00:34:59.699579 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:59.699582 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:59.702996 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:34:59.703492 21140 pod_ready.go:92] pod "kube-proxy-8tgp2" in "kube-system" namespace has status "Ready":"True"
I0804 00:34:59.703510 21140 pod_ready.go:81] duration metric: took 400.140483ms for pod "kube-proxy-8tgp2" in "kube-system" namespace to be "Ready" ...
I0804 00:34:59.703519 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vdn92" in "kube-system" namespace to be "Ready" ...
I0804 00:34:59.899662 21140 request.go:629] Waited for 196.079238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdn92
I0804 00:34:59.899722 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdn92
I0804 00:34:59.899742 21140 round_trippers.go:469] Request Headers:
I0804 00:34:59.899755 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:34:59.899761 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:34:59.903971 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:35:00.100145 21140 request.go:629] Waited for 195.383817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:35:00.100208 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:35:00.100214 21140 round_trippers.go:469] Request Headers:
I0804 00:35:00.100222 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:00.100227 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:00.103195 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:35:00.103921 21140 pod_ready.go:92] pod "kube-proxy-vdn92" in "kube-system" namespace has status "Ready":"True"
I0804 00:35:00.103941 21140 pod_ready.go:81] duration metric: took 400.4062ms for pod "kube-proxy-vdn92" in "kube-system" namespace to be "Ready" ...
I0804 00:35:00.103950 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:35:00.300134 21140 request.go:629] Waited for 196.118329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158
I0804 00:35:00.300211 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158
I0804 00:35:00.300217 21140 round_trippers.go:469] Request Headers:
I0804 00:35:00.300224 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:00.300232 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:00.303575 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:35:00.499708 21140 request.go:629] Waited for 195.37409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:35:00.499783 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:35:00.499788 21140 round_trippers.go:469] Request Headers:
I0804 00:35:00.499796 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:00.499800 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:00.503391 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:35:00.503811 21140 pod_ready.go:92] pod "kube-scheduler-ha-230158" in "kube-system" namespace has status "Ready":"True"
I0804 00:35:00.503826 21140 pod_ready.go:81] duration metric: took 399.870925ms for pod "kube-scheduler-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:35:00.503837 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:35:00.700071 21140 request.go:629] Waited for 196.180127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158-m02
I0804 00:35:00.700123 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158-m02
I0804 00:35:00.700128 21140 round_trippers.go:469] Request Headers:
I0804 00:35:00.700141 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:00.700144 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:00.703799 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:35:00.899942 21140 request.go:629] Waited for 195.429445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:35:00.899994 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:35:00.899999 21140 round_trippers.go:469] Request Headers:
I0804 00:35:00.900006 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:00.900011 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:00.903149 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:35:00.903698 21140 pod_ready.go:92] pod "kube-scheduler-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
I0804 00:35:00.903715 21140 pod_ready.go:81] duration metric: took 399.871252ms for pod "kube-scheduler-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:35:00.903725 21140 pod_ready.go:38] duration metric: took 3.201056231s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0804 00:35:00.903743 21140 api_server.go:52] waiting for apiserver process to appear ...
I0804 00:35:00.903790 21140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:35:00.919660 21140 api_server.go:72] duration metric: took 25.100801381s to wait for apiserver process to appear ...
I0804 00:35:00.919693 21140 api_server.go:88] waiting for apiserver healthz status ...
I0804 00:35:00.919712 21140 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8443/healthz ...
I0804 00:35:00.927575 21140 api_server.go:279] https://192.168.39.132:8443/healthz returned 200:
ok
I0804 00:35:00.927643 21140 round_trippers.go:463] GET https://192.168.39.132:8443/version
I0804 00:35:00.927653 21140 round_trippers.go:469] Request Headers:
I0804 00:35:00.927664 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:00.927670 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:00.929869 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:35:00.930059 21140 api_server.go:141] control plane version: v1.30.3
I0804 00:35:00.930081 21140 api_server.go:131] duration metric: took 10.380541ms to wait for apiserver health ...
I0804 00:35:00.930091 21140 system_pods.go:43] waiting for kube-system pods to appear ...
I0804 00:35:01.099684 21140 request.go:629] Waited for 169.52597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
I0804 00:35:01.099769 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
I0804 00:35:01.099779 21140 round_trippers.go:469] Request Headers:
I0804 00:35:01.099790 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:01.099803 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:01.105320 21140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0804 00:35:01.110892 21140 system_pods.go:59] 17 kube-system pods found
I0804 00:35:01.110925 21140 system_pods.go:61] "coredns-7db6d8ff4d-cqbjc" [d99b5cde-3b5b-4c29-82c4-ec9fa36b4479] Running
I0804 00:35:01.110932 21140 system_pods.go:61] "coredns-7db6d8ff4d-xt2gb" [2bd541a1-7bf0-4709-b600-365d5527b936] Running
I0804 00:35:01.110938 21140 system_pods.go:61] "etcd-ha-230158" [dc6a8dde-229d-4857-8f08-dcc8399b1420] Running
I0804 00:35:01.110943 21140 system_pods.go:61] "etcd-ha-230158-m02" [ed2085f3-8b06-4e15-8ed3-bd434d9aaebb] Running
I0804 00:35:01.110947 21140 system_pods.go:61] "kindnet-n5cql" [56108054-acd3-48ae-b929-75bd31cbd1ad] Running
I0804 00:35:01.110956 21140 system_pods.go:61] "kindnet-wfd5t" [b7ccd328-13aa-4161-8a20-5df8d153592f] Running
I0804 00:35:01.110961 21140 system_pods.go:61] "kube-apiserver-ha-230158" [8c1d6b4d-e30e-4b30-84ff-f53490a7d9ec] Running
I0804 00:35:01.110967 21140 system_pods.go:61] "kube-apiserver-ha-230158-m02" [8d384508-62d2-450a-a512-622aac96913a] Running
I0804 00:35:01.110972 21140 system_pods.go:61] "kube-controller-manager-ha-230158" [cf39dcfb-ca37-45e7-9306-456ea22b484c] Running
I0804 00:35:01.110977 21140 system_pods.go:61] "kube-controller-manager-ha-230158-m02" [c751903c-cb15-4718-87d7-f9ccf79d5869] Running
I0804 00:35:01.110983 21140 system_pods.go:61] "kube-proxy-8tgp2" [17ce55b9-8d25-4b4a-9b12-ff2cb84c22fa] Running
I0804 00:35:01.110987 21140 system_pods.go:61] "kube-proxy-vdn92" [02c77eda-8f0e-49d4-ae42-bbf18d0eeaf5] Running
I0804 00:35:01.110990 21140 system_pods.go:61] "kube-scheduler-ha-230158" [c24d7658-a418-4a21-8e93-e31af5d65e05] Running
I0804 00:35:01.110993 21140 system_pods.go:61] "kube-scheduler-ha-230158-m02" [97d10375-f0ca-4e13-bc7b-8d775aea4678] Running
I0804 00:35:01.110997 21140 system_pods.go:61] "kube-vip-ha-230158" [f784b7b5-0db7-49f2-bcac-3a0dbeee74dd] Running
I0804 00:35:01.111000 21140 system_pods.go:61] "kube-vip-ha-230158-m02" [0c04a6aa-7d79-4318-9cd7-b936d3358e19] Running
I0804 00:35:01.111003 21140 system_pods.go:61] "storage-provisioner" [653e0c50-af0a-4708-aaa9-b0d63616df94] Running
I0804 00:35:01.111009 21140 system_pods.go:74] duration metric: took 180.911846ms to wait for pod list to return data ...
I0804 00:35:01.111018 21140 default_sa.go:34] waiting for default service account to be created ...
I0804 00:35:01.299365 21140 request.go:629] Waited for 188.274972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
I0804 00:35:01.299415 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
I0804 00:35:01.299421 21140 round_trippers.go:469] Request Headers:
I0804 00:35:01.299429 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:01.299435 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:01.302985 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:35:01.303165 21140 default_sa.go:45] found service account: "default"
I0804 00:35:01.303184 21140 default_sa.go:55] duration metric: took 192.159471ms for default service account to be created ...
I0804 00:35:01.303192 21140 system_pods.go:116] waiting for k8s-apps to be running ...
I0804 00:35:01.499542 21140 request.go:629] Waited for 196.290629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
I0804 00:35:01.499612 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
I0804 00:35:01.499619 21140 round_trippers.go:469] Request Headers:
I0804 00:35:01.499627 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:01.499632 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:01.504649 21140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0804 00:35:01.510115 21140 system_pods.go:86] 17 kube-system pods found
I0804 00:35:01.510141 21140 system_pods.go:89] "coredns-7db6d8ff4d-cqbjc" [d99b5cde-3b5b-4c29-82c4-ec9fa36b4479] Running
I0804 00:35:01.510151 21140 system_pods.go:89] "coredns-7db6d8ff4d-xt2gb" [2bd541a1-7bf0-4709-b600-365d5527b936] Running
I0804 00:35:01.510156 21140 system_pods.go:89] "etcd-ha-230158" [dc6a8dde-229d-4857-8f08-dcc8399b1420] Running
I0804 00:35:01.510161 21140 system_pods.go:89] "etcd-ha-230158-m02" [ed2085f3-8b06-4e15-8ed3-bd434d9aaebb] Running
I0804 00:35:01.510168 21140 system_pods.go:89] "kindnet-n5cql" [56108054-acd3-48ae-b929-75bd31cbd1ad] Running
I0804 00:35:01.510173 21140 system_pods.go:89] "kindnet-wfd5t" [b7ccd328-13aa-4161-8a20-5df8d153592f] Running
I0804 00:35:01.510192 21140 system_pods.go:89] "kube-apiserver-ha-230158" [8c1d6b4d-e30e-4b30-84ff-f53490a7d9ec] Running
I0804 00:35:01.510199 21140 system_pods.go:89] "kube-apiserver-ha-230158-m02" [8d384508-62d2-450a-a512-622aac96913a] Running
I0804 00:35:01.510207 21140 system_pods.go:89] "kube-controller-manager-ha-230158" [cf39dcfb-ca37-45e7-9306-456ea22b484c] Running
I0804 00:35:01.510212 21140 system_pods.go:89] "kube-controller-manager-ha-230158-m02" [c751903c-cb15-4718-87d7-f9ccf79d5869] Running
I0804 00:35:01.510218 21140 system_pods.go:89] "kube-proxy-8tgp2" [17ce55b9-8d25-4b4a-9b12-ff2cb84c22fa] Running
I0804 00:35:01.510222 21140 system_pods.go:89] "kube-proxy-vdn92" [02c77eda-8f0e-49d4-ae42-bbf18d0eeaf5] Running
I0804 00:35:01.510228 21140 system_pods.go:89] "kube-scheduler-ha-230158" [c24d7658-a418-4a21-8e93-e31af5d65e05] Running
I0804 00:35:01.510245 21140 system_pods.go:89] "kube-scheduler-ha-230158-m02" [97d10375-f0ca-4e13-bc7b-8d775aea4678] Running
I0804 00:35:01.510254 21140 system_pods.go:89] "kube-vip-ha-230158" [f784b7b5-0db7-49f2-bcac-3a0dbeee74dd] Running
I0804 00:35:01.510259 21140 system_pods.go:89] "kube-vip-ha-230158-m02" [0c04a6aa-7d79-4318-9cd7-b936d3358e19] Running
I0804 00:35:01.510266 21140 system_pods.go:89] "storage-provisioner" [653e0c50-af0a-4708-aaa9-b0d63616df94] Running
I0804 00:35:01.510274 21140 system_pods.go:126] duration metric: took 207.074596ms to wait for k8s-apps to be running ...
I0804 00:35:01.510286 21140 system_svc.go:44] waiting for kubelet service to be running ....
I0804 00:35:01.510326 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:35:01.527215 21140 system_svc.go:56] duration metric: took 16.92222ms WaitForService to wait for kubelet
I0804 00:35:01.527241 21140 kubeadm.go:582] duration metric: took 25.708386161s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0804 00:35:01.527263 21140 node_conditions.go:102] verifying NodePressure condition ...
I0804 00:35:01.699586 21140 request.go:629] Waited for 172.25436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes
I0804 00:35:01.699658 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes
I0804 00:35:01.699664 21140 round_trippers.go:469] Request Headers:
I0804 00:35:01.699671 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:01.699676 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:01.703487 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:35:01.704426 21140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0804 00:35:01.704448 21140 node_conditions.go:123] node cpu capacity is 2
I0804 00:35:01.704458 21140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0804 00:35:01.704461 21140 node_conditions.go:123] node cpu capacity is 2
I0804 00:35:01.704465 21140 node_conditions.go:105] duration metric: took 177.197702ms to run NodePressure ...
I0804 00:35:01.704478 21140 start.go:241] waiting for startup goroutines ...
I0804 00:35:01.704509 21140 start.go:255] writing updated cluster config ...
I0804 00:35:01.706635 21140 out.go:177]
I0804 00:35:01.708170 21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:35:01.708270 21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:35:01.709941 21140 out.go:177] * Starting "ha-230158-m03" control-plane node in "ha-230158" cluster
I0804 00:35:01.711379 21140 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0804 00:35:01.711400 21140 cache.go:56] Caching tarball of preloaded images
I0804 00:35:01.711488 21140 preload.go:172] Found /home/jenkins/minikube-integration/19364-3947/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0804 00:35:01.711501 21140 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0804 00:35:01.711588 21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:35:01.711745 21140 start.go:360] acquireMachinesLock for ha-230158-m03: {Name:mk3c8b650475b5a29be5f1e49e0345d4de7c1632 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0804 00:35:01.711784 21140 start.go:364] duration metric: took 22.409µs to acquireMachinesLock for "ha-230158-m03"
I0804 00:35:01.711800 21140 start.go:93] Provisioning new machine with config: &{Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0804 00:35:01.711916 21140 start.go:125] createHost starting for "m03" (driver="kvm2")
I0804 00:35:01.713379 21140 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0804 00:35:01.713453 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:35:01.713490 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:35:01.728747 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
I0804 00:35:01.729142 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:35:01.729578 21140 main.go:141] libmachine: Using API Version 1
I0804 00:35:01.729600 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:35:01.729919 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:35:01.730104 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetMachineName
I0804 00:35:01.730287 21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:35:01.730456 21140 start.go:159] libmachine.API.Create for "ha-230158" (driver="kvm2")
I0804 00:35:01.730487 21140 client.go:168] LocalClient.Create starting
I0804 00:35:01.730521 21140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem
I0804 00:35:01.730562 21140 main.go:141] libmachine: Decoding PEM data...
I0804 00:35:01.730584 21140 main.go:141] libmachine: Parsing certificate...
I0804 00:35:01.730648 21140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem
I0804 00:35:01.730674 21140 main.go:141] libmachine: Decoding PEM data...
I0804 00:35:01.730690 21140 main.go:141] libmachine: Parsing certificate...
I0804 00:35:01.730714 21140 main.go:141] libmachine: Running pre-create checks...
I0804 00:35:01.730726 21140 main.go:141] libmachine: (ha-230158-m03) Calling .PreCreateCheck
I0804 00:35:01.730876 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetConfigRaw
I0804 00:35:01.732019 21140 main.go:141] libmachine: Creating machine...
I0804 00:35:01.732037 21140 main.go:141] libmachine: (ha-230158-m03) Calling .Create
I0804 00:35:01.732201 21140 main.go:141] libmachine: (ha-230158-m03) Creating KVM machine...
I0804 00:35:01.733430 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found existing default KVM network
I0804 00:35:01.733570 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found existing private KVM network mk-ha-230158
I0804 00:35:01.733660 21140 main.go:141] libmachine: (ha-230158-m03) Setting up store path in /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03 ...
I0804 00:35:01.733700 21140 main.go:141] libmachine: (ha-230158-m03) Building disk image from file:///home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
I0804 00:35:01.733750 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:01.733651 22024 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-3947/.minikube
I0804 00:35:01.733838 21140 main.go:141] libmachine: (ha-230158-m03) Downloading /home/jenkins/minikube-integration/19364-3947/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-3947/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
I0804 00:35:01.963276 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:01.963145 22024 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa...
I0804 00:35:02.150959 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:02.150818 22024 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/ha-230158-m03.rawdisk...
I0804 00:35:02.150990 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Writing magic tar header
I0804 00:35:02.151004 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Writing SSH key tar header
I0804 00:35:02.151017 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:02.150934 22024 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03 ...
I0804 00:35:02.151033 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03
I0804 00:35:02.151053 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube/machines
I0804 00:35:02.151069 21140 main.go:141] libmachine: (ha-230158-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03 (perms=drwx------)
I0804 00:35:02.151077 21140 main.go:141] libmachine: (ha-230158-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube/machines (perms=drwxr-xr-x)
I0804 00:35:02.151085 21140 main.go:141] libmachine: (ha-230158-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947/.minikube (perms=drwxr-xr-x)
I0804 00:35:02.151097 21140 main.go:141] libmachine: (ha-230158-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-3947 (perms=drwxrwxr-x)
I0804 00:35:02.151109 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947/.minikube
I0804 00:35:02.151121 21140 main.go:141] libmachine: (ha-230158-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0804 00:35:02.151136 21140 main.go:141] libmachine: (ha-230158-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0804 00:35:02.151147 21140 main.go:141] libmachine: (ha-230158-m03) Creating domain...
I0804 00:35:02.151185 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-3947
I0804 00:35:02.151213 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0804 00:35:02.151224 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home/jenkins
I0804 00:35:02.151238 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Checking permissions on dir: /home
I0804 00:35:02.151278 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Skipping /home - not owner
I0804 00:35:02.152048 21140 main.go:141] libmachine: (ha-230158-m03) define libvirt domain using xml:
I0804 00:35:02.152072 21140 main.go:141] libmachine: (ha-230158-m03) <domain type='kvm'>
I0804 00:35:02.152080 21140 main.go:141] libmachine: (ha-230158-m03) <name>ha-230158-m03</name>
I0804 00:35:02.152085 21140 main.go:141] libmachine: (ha-230158-m03) <memory unit='MiB'>2200</memory>
I0804 00:35:02.152090 21140 main.go:141] libmachine: (ha-230158-m03) <vcpu>2</vcpu>
I0804 00:35:02.152101 21140 main.go:141] libmachine: (ha-230158-m03) <features>
I0804 00:35:02.152120 21140 main.go:141] libmachine: (ha-230158-m03) <acpi/>
I0804 00:35:02.152127 21140 main.go:141] libmachine: (ha-230158-m03) <apic/>
I0804 00:35:02.152135 21140 main.go:141] libmachine: (ha-230158-m03) <pae/>
I0804 00:35:02.152153 21140 main.go:141] libmachine: (ha-230158-m03)
I0804 00:35:02.152164 21140 main.go:141] libmachine: (ha-230158-m03) </features>
I0804 00:35:02.152172 21140 main.go:141] libmachine: (ha-230158-m03) <cpu mode='host-passthrough'>
I0804 00:35:02.152181 21140 main.go:141] libmachine: (ha-230158-m03)
I0804 00:35:02.152186 21140 main.go:141] libmachine: (ha-230158-m03) </cpu>
I0804 00:35:02.152212 21140 main.go:141] libmachine: (ha-230158-m03) <os>
I0804 00:35:02.152219 21140 main.go:141] libmachine: (ha-230158-m03) <type>hvm</type>
I0804 00:35:02.152228 21140 main.go:141] libmachine: (ha-230158-m03) <boot dev='cdrom'/>
I0804 00:35:02.152238 21140 main.go:141] libmachine: (ha-230158-m03) <boot dev='hd'/>
I0804 00:35:02.152248 21140 main.go:141] libmachine: (ha-230158-m03) <bootmenu enable='no'/>
I0804 00:35:02.152262 21140 main.go:141] libmachine: (ha-230158-m03) </os>
I0804 00:35:02.152273 21140 main.go:141] libmachine: (ha-230158-m03) <devices>
I0804 00:35:02.152284 21140 main.go:141] libmachine: (ha-230158-m03) <disk type='file' device='cdrom'>
I0804 00:35:02.152296 21140 main.go:141] libmachine: (ha-230158-m03) <source file='/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/boot2docker.iso'/>
I0804 00:35:02.152306 21140 main.go:141] libmachine: (ha-230158-m03) <target dev='hdc' bus='scsi'/>
I0804 00:35:02.152315 21140 main.go:141] libmachine: (ha-230158-m03) <readonly/>
I0804 00:35:02.152329 21140 main.go:141] libmachine: (ha-230158-m03) </disk>
I0804 00:35:02.152340 21140 main.go:141] libmachine: (ha-230158-m03) <disk type='file' device='disk'>
I0804 00:35:02.152352 21140 main.go:141] libmachine: (ha-230158-m03) <driver name='qemu' type='raw' cache='default' io='threads' />
I0804 00:35:02.152365 21140 main.go:141] libmachine: (ha-230158-m03) <source file='/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/ha-230158-m03.rawdisk'/>
I0804 00:35:02.152374 21140 main.go:141] libmachine: (ha-230158-m03) <target dev='hda' bus='virtio'/>
I0804 00:35:02.152379 21140 main.go:141] libmachine: (ha-230158-m03) </disk>
I0804 00:35:02.152388 21140 main.go:141] libmachine: (ha-230158-m03) <interface type='network'>
I0804 00:35:02.152415 21140 main.go:141] libmachine: (ha-230158-m03) <source network='mk-ha-230158'/>
I0804 00:35:02.152442 21140 main.go:141] libmachine: (ha-230158-m03) <model type='virtio'/>
I0804 00:35:02.152464 21140 main.go:141] libmachine: (ha-230158-m03) </interface>
I0804 00:35:02.152481 21140 main.go:141] libmachine: (ha-230158-m03) <interface type='network'>
I0804 00:35:02.152495 21140 main.go:141] libmachine: (ha-230158-m03) <source network='default'/>
I0804 00:35:02.152503 21140 main.go:141] libmachine: (ha-230158-m03) <model type='virtio'/>
I0804 00:35:02.152511 21140 main.go:141] libmachine: (ha-230158-m03) </interface>
I0804 00:35:02.152517 21140 main.go:141] libmachine: (ha-230158-m03) <serial type='pty'>
I0804 00:35:02.152524 21140 main.go:141] libmachine: (ha-230158-m03) <target port='0'/>
I0804 00:35:02.152531 21140 main.go:141] libmachine: (ha-230158-m03) </serial>
I0804 00:35:02.152540 21140 main.go:141] libmachine: (ha-230158-m03) <console type='pty'>
I0804 00:35:02.152555 21140 main.go:141] libmachine: (ha-230158-m03) <target type='serial' port='0'/>
I0804 00:35:02.152566 21140 main.go:141] libmachine: (ha-230158-m03) </console>
I0804 00:35:02.152575 21140 main.go:141] libmachine: (ha-230158-m03) <rng model='virtio'>
I0804 00:35:02.152585 21140 main.go:141] libmachine: (ha-230158-m03) <backend model='random'>/dev/random</backend>
I0804 00:35:02.152594 21140 main.go:141] libmachine: (ha-230158-m03) </rng>
I0804 00:35:02.152602 21140 main.go:141] libmachine: (ha-230158-m03)
I0804 00:35:02.152608 21140 main.go:141] libmachine: (ha-230158-m03)
I0804 00:35:02.152615 21140 main.go:141] libmachine: (ha-230158-m03) </devices>
I0804 00:35:02.152627 21140 main.go:141] libmachine: (ha-230158-m03) </domain>
I0804 00:35:02.152641 21140 main.go:141] libmachine: (ha-230158-m03)
I0804 00:35:02.159019 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:5c:f5:c5 in network default
I0804 00:35:02.159725 21140 main.go:141] libmachine: (ha-230158-m03) Ensuring networks are active...
I0804 00:35:02.159747 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:02.160530 21140 main.go:141] libmachine: (ha-230158-m03) Ensuring network default is active
I0804 00:35:02.160945 21140 main.go:141] libmachine: (ha-230158-m03) Ensuring network mk-ha-230158 is active
I0804 00:35:02.161357 21140 main.go:141] libmachine: (ha-230158-m03) Getting domain xml...
I0804 00:35:02.162288 21140 main.go:141] libmachine: (ha-230158-m03) Creating domain...
I0804 00:35:03.416375 21140 main.go:141] libmachine: (ha-230158-m03) Waiting to get IP...
I0804 00:35:03.417184 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:03.417578 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:03.417618 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:03.417560 22024 retry.go:31] will retry after 274.137672ms: waiting for machine to come up
I0804 00:35:03.693121 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:03.693660 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:03.693689 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:03.693595 22024 retry.go:31] will retry after 356.003158ms: waiting for machine to come up
I0804 00:35:04.051100 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:04.051561 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:04.051600 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:04.051508 22024 retry.go:31] will retry after 385.228924ms: waiting for machine to come up
I0804 00:35:04.437907 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:04.438266 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:04.438294 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:04.438213 22024 retry.go:31] will retry after 587.872097ms: waiting for machine to come up
I0804 00:35:05.027968 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:05.028431 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:05.028462 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:05.028378 22024 retry.go:31] will retry after 473.396768ms: waiting for machine to come up
I0804 00:35:05.502979 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:05.503346 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:05.503377 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:05.503286 22024 retry.go:31] will retry after 888.791841ms: waiting for machine to come up
I0804 00:35:06.393433 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:06.393846 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:06.393879 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:06.393833 22024 retry.go:31] will retry after 800.330787ms: waiting for machine to come up
I0804 00:35:07.196097 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:07.196617 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:07.196645 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:07.196581 22024 retry.go:31] will retry after 1.350308245s: waiting for machine to come up
I0804 00:35:08.549064 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:08.549491 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:08.549517 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:08.549449 22024 retry.go:31] will retry after 1.414061347s: waiting for machine to come up
I0804 00:35:09.964954 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:09.965386 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:09.965415 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:09.965338 22024 retry.go:31] will retry after 2.016417552s: waiting for machine to come up
I0804 00:35:11.983856 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:11.984325 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:11.984359 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:11.984293 22024 retry.go:31] will retry after 2.735425811s: waiting for machine to come up
I0804 00:35:14.722954 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:14.723405 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:14.723426 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:14.723375 22024 retry.go:31] will retry after 3.588857245s: waiting for machine to come up
I0804 00:35:18.314440 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:18.314835 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find current IP address of domain ha-230158-m03 in network mk-ha-230158
I0804 00:35:18.314861 21140 main.go:141] libmachine: (ha-230158-m03) DBG | I0804 00:35:18.314796 22024 retry.go:31] will retry after 3.432629659s: waiting for machine to come up
I0804 00:35:21.748758 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:21.749225 21140 main.go:141] libmachine: (ha-230158-m03) Found IP for machine: 192.168.39.35
I0804 00:35:21.749253 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has current primary IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:21.749262 21140 main.go:141] libmachine: (ha-230158-m03) Reserving static IP address...
I0804 00:35:21.749675 21140 main.go:141] libmachine: (ha-230158-m03) DBG | unable to find host DHCP lease matching {name: "ha-230158-m03", mac: "52:54:00:df:27:1f", ip: "192.168.39.35"} in network mk-ha-230158
I0804 00:35:21.820226 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Getting to WaitForSSH function...
I0804 00:35:21.820257 21140 main.go:141] libmachine: (ha-230158-m03) Reserved static IP address: 192.168.39.35
I0804 00:35:21.820271 21140 main.go:141] libmachine: (ha-230158-m03) Waiting for SSH to be available...
I0804 00:35:21.822782 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:21.823219 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:minikube Clientid:01:52:54:00:df:27:1f}
I0804 00:35:21.823319 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:21.823340 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Using SSH client type: external
I0804 00:35:21.823356 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa (-rw-------)
I0804 00:35:21.823387 21140 main.go:141] libmachine: (ha-230158-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.35 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
I0804 00:35:21.823405 21140 main.go:141] libmachine: (ha-230158-m03) DBG | About to run SSH command:
I0804 00:35:21.823420 21140 main.go:141] libmachine: (ha-230158-m03) DBG | exit 0
I0804 00:35:21.942022 21140 main.go:141] libmachine: (ha-230158-m03) DBG | SSH cmd err, output: <nil>:
I0804 00:35:21.942330 21140 main.go:141] libmachine: (ha-230158-m03) KVM machine creation complete!
I0804 00:35:21.942785 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetConfigRaw
I0804 00:35:21.943409 21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:35:21.943631 21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:35:21.943818 21140 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0804 00:35:21.943835 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetState
I0804 00:35:21.945112 21140 main.go:141] libmachine: Detecting operating system of created instance...
I0804 00:35:21.945135 21140 main.go:141] libmachine: Waiting for SSH to be available...
I0804 00:35:21.945141 21140 main.go:141] libmachine: Getting to WaitForSSH function...
I0804 00:35:21.945147 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:35:21.947237 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:21.947573 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:21.947603 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:21.947719 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:35:21.947889 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:21.948050 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:21.948187 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:35:21.948350 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:35:21.948535 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.35 22 <nil> <nil>}
I0804 00:35:21.948547 21140 main.go:141] libmachine: About to run SSH command:
exit 0
I0804 00:35:22.045614 21140 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0804 00:35:22.045637 21140 main.go:141] libmachine: Detecting the provisioner...
I0804 00:35:22.045645 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:35:22.048807 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.049223 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:22.049252 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.049375 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:35:22.049569 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.049792 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.049921 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:35:22.050099 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:35:22.050313 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.35 22 <nil> <nil>}
I0804 00:35:22.050326 21140 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0804 00:35:22.147137 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0804 00:35:22.147202 21140 main.go:141] libmachine: found compatible host: buildroot
I0804 00:35:22.147208 21140 main.go:141] libmachine: Provisioning with buildroot...
I0804 00:35:22.147216 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetMachineName
I0804 00:35:22.147474 21140 buildroot.go:166] provisioning hostname "ha-230158-m03"
I0804 00:35:22.147499 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetMachineName
I0804 00:35:22.147694 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:35:22.150147 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.150579 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:22.150601 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.150796 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:35:22.150958 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.151108 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.151221 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:35:22.151378 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:35:22.151550 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.35 22 <nil> <nil>}
I0804 00:35:22.151566 21140 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-230158-m03 && echo "ha-230158-m03" | sudo tee /etc/hostname
I0804 00:35:22.265955 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-230158-m03
I0804 00:35:22.265979 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:35:22.268571 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.268960 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:22.268992 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.269150 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:35:22.269317 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.269474 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.269644 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:35:22.269814 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:35:22.269964 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.35 22 <nil> <nil>}
I0804 00:35:22.269981 21140 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-230158-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-230158-m03/g' /etc/hosts;
else
echo '127.0.1.1 ha-230158-m03' | sudo tee -a /etc/hosts;
fi
fi
I0804 00:35:22.375879 21140 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0804 00:35:22.375906 21140 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-3947/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-3947/.minikube}
I0804 00:35:22.375920 21140 buildroot.go:174] setting up certificates
I0804 00:35:22.375931 21140 provision.go:84] configureAuth start
I0804 00:35:22.375939 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetMachineName
I0804 00:35:22.376233 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:35:22.378696 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.379050 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:22.379079 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.379206 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:35:22.381767 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.382211 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:22.382254 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.382408 21140 provision.go:143] copyHostCerts
I0804 00:35:22.382433 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
I0804 00:35:22.382462 21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem, removing ...
I0804 00:35:22.382469 21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem
I0804 00:35:22.382538 21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/ca.pem (1082 bytes)
I0804 00:35:22.382611 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
I0804 00:35:22.382630 21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem, removing ...
I0804 00:35:22.382634 21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem
I0804 00:35:22.382656 21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/cert.pem (1123 bytes)
I0804 00:35:22.382696 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
I0804 00:35:22.382713 21140 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem, removing ...
I0804 00:35:22.382720 21140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem
I0804 00:35:22.382741 21140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-3947/.minikube/key.pem (1679 bytes)
I0804 00:35:22.382788 21140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem org=jenkins.ha-230158-m03 san=[127.0.0.1 192.168.39.35 ha-230158-m03 localhost minikube]
I0804 00:35:22.490503 21140 provision.go:177] copyRemoteCerts
I0804 00:35:22.490552 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0804 00:35:22.490574 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:35:22.492845 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.493117 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:22.493144 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.493295 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:35:22.493500 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.493649 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:35:22.493783 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
I0804 00:35:22.572548 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0804 00:35:22.572629 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0804 00:35:22.597372 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem -> /etc/docker/server.pem
I0804 00:35:22.597440 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0804 00:35:22.622258 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0804 00:35:22.622321 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0804 00:35:22.646173 21140 provision.go:87] duration metric: took 270.230572ms to configureAuth
I0804 00:35:22.646200 21140 buildroot.go:189] setting minikube options for container-runtime
I0804 00:35:22.646432 21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:35:22.646456 21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:35:22.646743 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:35:22.649357 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.649778 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:22.649807 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.649974 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:35:22.650150 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.650343 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.650467 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:35:22.650598 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:35:22.650751 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.35 22 <nil> <nil>}
I0804 00:35:22.650761 21140 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0804 00:35:22.752393 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0804 00:35:22.752413 21140 buildroot.go:70] root file system type: tmpfs
I0804 00:35:22.752526 21140 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0804 00:35:22.752547 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:35:22.755378 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.755730 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:22.755755 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.755890 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:35:22.756069 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.756225 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.756364 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:35:22.756544 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:35:22.756691 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.35 22 <nil> <nil>}
I0804 00:35:22.756751 21140 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.132"
Environment="NO_PROXY=192.168.39.132,192.168.39.188"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0804 00:35:22.870196 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.132
Environment=NO_PROXY=192.168.39.132,192.168.39.188
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0804 00:35:22.870256 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:35:22.872892 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.873134 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:22.873163 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:22.873347 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:35:22.873561 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.873716 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:22.873866 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:35:22.874030 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:35:22.874250 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.35 22 <nil> <nil>}
I0804 00:35:22.874280 21140 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0804 00:35:24.662565 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0804 00:35:24.662588 21140 main.go:141] libmachine: Checking connection to Docker...
I0804 00:35:24.662597 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetURL
I0804 00:35:24.663821 21140 main.go:141] libmachine: (ha-230158-m03) DBG | Using libvirt version 6000000
I0804 00:35:24.666698 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.667250 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:24.667290 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.667530 21140 main.go:141] libmachine: Docker is up and running!
I0804 00:35:24.667549 21140 main.go:141] libmachine: Reticulating splines...
I0804 00:35:24.667555 21140 client.go:171] duration metric: took 22.937060688s to LocalClient.Create
I0804 00:35:24.667576 21140 start.go:167] duration metric: took 22.937122865s to libmachine.API.Create "ha-230158"
I0804 00:35:24.667585 21140 start.go:293] postStartSetup for "ha-230158-m03" (driver="kvm2")
I0804 00:35:24.667593 21140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0804 00:35:24.667611 21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:35:24.667873 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0804 00:35:24.667898 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:35:24.670379 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.670827 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:24.670854 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.671038 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:35:24.671209 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:24.671380 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:35:24.671528 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
I0804 00:35:24.760389 21140 ssh_runner.go:195] Run: cat /etc/os-release
I0804 00:35:24.769270 21140 info.go:137] Remote host: Buildroot 2023.02.9
I0804 00:35:24.769294 21140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/addons for local assets ...
I0804 00:35:24.769356 21140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-3947/.minikube/files for local assets ...
I0804 00:35:24.769458 21140 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> 111362.pem in /etc/ssl/certs
I0804 00:35:24.769470 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /etc/ssl/certs/111362.pem
I0804 00:35:24.769564 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0804 00:35:24.783201 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /etc/ssl/certs/111362.pem (1708 bytes)
I0804 00:35:24.806787 21140 start.go:296] duration metric: took 139.191095ms for postStartSetup
I0804 00:35:24.806880 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetConfigRaw
I0804 00:35:24.807425 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:35:24.810032 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.810420 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:24.810442 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.810659 21140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/config.json ...
I0804 00:35:24.810959 21140 start.go:128] duration metric: took 23.099032096s to createHost
I0804 00:35:24.810982 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:35:24.813730 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.814183 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:24.814207 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.814410 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:35:24.814594 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:24.814795 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:24.814975 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:35:24.815165 21140 main.go:141] libmachine: Using SSH client type: native
I0804 00:35:24.815390 21140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.35 22 <nil> <nil>}
I0804 00:35:24.815405 21140 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0804 00:35:24.918593 21140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731724.892479961
I0804 00:35:24.918614 21140 fix.go:216] guest clock: 1722731724.892479961
I0804 00:35:24.918624 21140 fix.go:229] Guest: 2024-08-04 00:35:24.892479961 +0000 UTC Remote: 2024-08-04 00:35:24.810971632 +0000 UTC m=+173.988054035 (delta=81.508329ms)
I0804 00:35:24.918642 21140 fix.go:200] guest clock delta is within tolerance: 81.508329ms
I0804 00:35:24.918647 21140 start.go:83] releasing machines lock for "ha-230158-m03", held for 23.206854929s
I0804 00:35:24.918663 21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:35:24.918886 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:35:24.921314 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.921811 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:24.921841 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.923754 21140 out.go:177] * Found network options:
I0804 00:35:24.924902 21140 out.go:177] - NO_PROXY=192.168.39.132,192.168.39.188
W0804 00:35:24.925923 21140 proxy.go:119] fail to check proxy env: Error ip not in block
W0804 00:35:24.925944 21140 proxy.go:119] fail to check proxy env: Error ip not in block
I0804 00:35:24.925955 21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:35:24.926479 21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:35:24.926667 21140 main.go:141] libmachine: (ha-230158-m03) Calling .DriverName
I0804 00:35:24.926757 21140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0804 00:35:24.926796 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
W0804 00:35:24.926816 21140 proxy.go:119] fail to check proxy env: Error ip not in block
W0804 00:35:24.926838 21140 proxy.go:119] fail to check proxy env: Error ip not in block
I0804 00:35:24.926896 21140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0804 00:35:24.926913 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHHostname
I0804 00:35:24.929582 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.929653 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.929952 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:24.929977 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.930004 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:24.930020 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:24.930116 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:35:24.930210 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHPort
I0804 00:35:24.930328 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:24.930396 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHKeyPath
I0804 00:35:24.930458 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:35:24.930539 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetSSHUsername
I0804 00:35:24.930611 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
I0804 00:35:24.930658 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158-m03/id_rsa Username:docker}
W0804 00:35:25.025703 21140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0804 00:35:25.025786 21140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0804 00:35:25.044692 21140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0804 00:35:25.044716 21140 start.go:495] detecting cgroup driver to use...
I0804 00:35:25.044822 21140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0804 00:35:25.064747 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0804 00:35:25.076847 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0804 00:35:25.089218 21140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0804 00:35:25.089297 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0804 00:35:25.102072 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0804 00:35:25.114924 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0804 00:35:25.127305 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0804 00:35:25.139446 21140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0804 00:35:25.152031 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0804 00:35:25.165204 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0804 00:35:25.177111 21140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0804 00:35:25.188400 21140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0804 00:35:25.198087 21140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0804 00:35:25.208382 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:35:25.321490 21140 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0804 00:35:25.348001 21140 start.go:495] detecting cgroup driver to use...
I0804 00:35:25.348071 21140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0804 00:35:25.365037 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0804 00:35:25.379611 21140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0804 00:35:25.399009 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0804 00:35:25.412403 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0804 00:35:25.425550 21140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0804 00:35:25.457165 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0804 00:35:25.472279 21140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0804 00:35:25.490577 21140 ssh_runner.go:195] Run: which cri-dockerd
I0804 00:35:25.494212 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0804 00:35:25.503793 21140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0804 00:35:25.520119 21140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0804 00:35:25.631730 21140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0804 00:35:25.748311 21140 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0804 00:35:25.748356 21140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0804 00:35:25.765922 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:35:25.887983 21140 ssh_runner.go:195] Run: sudo systemctl restart docker
I0804 00:35:28.267711 21140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.379695689s)
I0804 00:35:28.267783 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0804 00:35:28.280799 21140 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0804 00:35:28.297004 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0804 00:35:28.309602 21140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0804 00:35:28.421120 21140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0804 00:35:28.545022 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:35:28.673591 21140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0804 00:35:28.691136 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0804 00:35:28.704181 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:35:28.819323 21140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0804 00:35:28.911154 21140 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0804 00:35:28.911254 21140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0804 00:35:28.916801 21140 start.go:563] Will wait 60s for crictl version
I0804 00:35:28.916847 21140 ssh_runner.go:195] Run: which crictl
I0804 00:35:28.920890 21140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0804 00:35:28.957669 21140 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.1.1
RuntimeApiVersion: v1
I0804 00:35:28.957733 21140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0804 00:35:28.988116 21140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0804 00:35:29.014642 21140 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
I0804 00:35:29.015880 21140 out.go:177] - env NO_PROXY=192.168.39.132
I0804 00:35:29.017062 21140 out.go:177] - env NO_PROXY=192.168.39.132,192.168.39.188
I0804 00:35:29.018490 21140 main.go:141] libmachine: (ha-230158-m03) Calling .GetIP
I0804 00:35:29.021070 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:29.021419 21140 main.go:141] libmachine: (ha-230158-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:27:1f", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:35:16 +0000 UTC Type:0 Mac:52:54:00:df:27:1f Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-230158-m03 Clientid:01:52:54:00:df:27:1f}
I0804 00:35:29.021442 21140 main.go:141] libmachine: (ha-230158-m03) DBG | domain ha-230158-m03 has defined IP address 192.168.39.35 and MAC address 52:54:00:df:27:1f in network mk-ha-230158
I0804 00:35:29.021716 21140 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0804 00:35:29.025837 21140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0804 00:35:29.038464 21140 mustload.go:65] Loading cluster: ha-230158
I0804 00:35:29.038684 21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:35:29.038925 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:35:29.038959 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:35:29.053933 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42867
I0804 00:35:29.054405 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:35:29.054897 21140 main.go:141] libmachine: Using API Version 1
I0804 00:35:29.054914 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:35:29.055243 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:35:29.055464 21140 main.go:141] libmachine: (ha-230158) Calling .GetState
I0804 00:35:29.056933 21140 host.go:66] Checking if "ha-230158" exists ...
I0804 00:35:29.057254 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:35:29.057302 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:35:29.071693 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37685
I0804 00:35:29.072063 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:35:29.072507 21140 main.go:141] libmachine: Using API Version 1
I0804 00:35:29.072528 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:35:29.072818 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:35:29.073053 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:35:29.073265 21140 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158 for IP: 192.168.39.35
I0804 00:35:29.073276 21140 certs.go:194] generating shared ca certs ...
I0804 00:35:29.073300 21140 certs.go:226] acquiring lock for ca certs: {Name:mkffa482a260ec35b4e7e61a9f84c11349615c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:35:29.073423 21140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key
I0804 00:35:29.073476 21140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key
I0804 00:35:29.073489 21140 certs.go:256] generating profile certs ...
I0804 00:35:29.073578 21140 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key
I0804 00:35:29.073611 21140 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.07a968ef
I0804 00:35:29.073632 21140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.07a968ef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132 192.168.39.188 192.168.39.35 192.168.39.254]
I0804 00:35:29.192480 21140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.07a968ef ...
I0804 00:35:29.192511 21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.07a968ef: {Name:mkace921321134e2c31957acee1a1e7265efc015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:35:29.192690 21140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.07a968ef ...
I0804 00:35:29.192705 21140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.07a968ef: {Name:mkb8a8c865fe06663f3162fa98a89ba246d74f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0804 00:35:29.192818 21140 certs.go:381] copying /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt.07a968ef -> /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt
I0804 00:35:29.192972 21140 certs.go:385] copying /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key.07a968ef -> /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key
I0804 00:35:29.193092 21140 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key
I0804 00:35:29.193106 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0804 00:35:29.193120 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0804 00:35:29.193133 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0804 00:35:29.193146 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0804 00:35:29.193159 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0804 00:35:29.193171 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0804 00:35:29.193183 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0804 00:35:29.193194 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0804 00:35:29.193239 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem (1338 bytes)
W0804 00:35:29.193267 21140 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136_empty.pem, impossibly tiny 0 bytes
I0804 00:35:29.193276 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca-key.pem (1679 bytes)
I0804 00:35:29.193305 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/ca.pem (1082 bytes)
I0804 00:35:29.193328 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/cert.pem (1123 bytes)
I0804 00:35:29.193348 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/key.pem (1679 bytes)
I0804 00:35:29.193424 21140 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem (1708 bytes)
I0804 00:35:29.193453 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0804 00:35:29.193467 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem -> /usr/share/ca-certificates/11136.pem
I0804 00:35:29.193479 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem -> /usr/share/ca-certificates/111362.pem
I0804 00:35:29.193506 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:35:29.196795 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:35:29.197229 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:35:29.197254 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:35:29.197484 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:35:29.197690 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:35:29.197860 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:35:29.198029 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:35:29.274602 21140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
I0804 00:35:29.280451 21140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0804 00:35:29.292613 21140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
I0804 00:35:29.297292 21140 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
I0804 00:35:29.308604 21140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
I0804 00:35:29.315122 21140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0804 00:35:29.332152 21140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
I0804 00:35:29.337897 21140 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
I0804 00:35:29.350183 21140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
I0804 00:35:29.354439 21140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0804 00:35:29.366402 21140 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
I0804 00:35:29.370314 21140 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
I0804 00:35:29.381004 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0804 00:35:29.407430 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0804 00:35:29.432902 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0804 00:35:29.457997 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0804 00:35:29.482458 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
I0804 00:35:29.506034 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0804 00:35:29.529331 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0804 00:35:29.552419 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0804 00:35:29.576496 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0804 00:35:29.600781 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/certs/11136.pem --> /usr/share/ca-certificates/11136.pem (1338 bytes)
I0804 00:35:29.623522 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/files/etc/ssl/certs/111362.pem --> /usr/share/ca-certificates/111362.pem (1708 bytes)
I0804 00:35:29.646258 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0804 00:35:29.662320 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
I0804 00:35:29.680498 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0804 00:35:29.697629 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
I0804 00:35:29.713187 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0804 00:35:29.730685 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
I0804 00:35:29.747896 21140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0804 00:35:29.765188 21140 ssh_runner.go:195] Run: openssl version
I0804 00:35:29.770872 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0804 00:35:29.781638 21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0804 00:35:29.786535 21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 4 00:21 /usr/share/ca-certificates/minikubeCA.pem
I0804 00:35:29.786575 21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0804 00:35:29.792427 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0804 00:35:29.803532 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11136.pem && ln -fs /usr/share/ca-certificates/11136.pem /etc/ssl/certs/11136.pem"
I0804 00:35:29.814595 21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11136.pem
I0804 00:35:29.819477 21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 4 00:28 /usr/share/ca-certificates/11136.pem
I0804 00:35:29.819521 21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11136.pem
I0804 00:35:29.825367 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11136.pem /etc/ssl/certs/51391683.0"
I0804 00:35:29.836832 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111362.pem && ln -fs /usr/share/ca-certificates/111362.pem /etc/ssl/certs/111362.pem"
I0804 00:35:29.847347 21140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111362.pem
I0804 00:35:29.851772 21140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 4 00:28 /usr/share/ca-certificates/111362.pem
I0804 00:35:29.851818 21140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111362.pem
I0804 00:35:29.857592 21140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111362.pem /etc/ssl/certs/3ec20f2e.0"
I0804 00:35:29.868948 21140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0804 00:35:29.872798 21140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0804 00:35:29.872860 21140 kubeadm.go:934] updating node {m03 192.168.39.35 8443 v1.30.3 docker true true} ...
I0804 00:35:29.872962 21140 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-230158-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.35
[Install]
config:
{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0804 00:35:29.872997 21140 kube-vip.go:115] generating kube-vip config ...
I0804 00:35:29.873028 21140 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0804 00:35:29.888413 21140 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0804 00:35:29.888496 21140 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 192.168.39.254
    - name: prometheus_server
      value: :2112
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "8443"
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/admin.conf"
    name: kubeconfig
status: {}
I0804 00:35:29.888563 21140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
I0804 00:35:29.898226 21140 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
Initiating transfer...
I0804 00:35:29.898292 21140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
I0804 00:35:29.907477 21140 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
I0804 00:35:29.907501 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
I0804 00:35:29.907549 21140 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
I0804 00:35:29.907481 21140 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
I0804 00:35:29.907583 21140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
I0804 00:35:29.907605 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
I0804 00:35:29.907596 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:35:29.907687 21140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
I0804 00:35:29.920969 21140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
I0804 00:35:29.921004 21140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
I0804 00:35:29.921030 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
I0804 00:35:29.921052 21140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
I0804 00:35:29.921052 21140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
I0804 00:35:29.921083 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
I0804 00:35:29.933235 21140 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
I0804 00:35:29.933273 21140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-3947/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
I0804 00:35:30.833563 21140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I0804 00:35:30.843265 21140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I0804 00:35:30.861374 21140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0804 00:35:30.877889 21140 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
I0804 00:35:30.894388 21140 ssh_runner.go:195] Run: grep 192.168.39.254 control-plane.minikube.internal$ /etc/hosts
I0804 00:35:30.898175 21140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0804 00:35:30.910151 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:35:31.026639 21140 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0804 00:35:31.050034 21140 host.go:66] Checking if "ha-230158" exists ...
I0804 00:35:31.050392 21140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0804 00:35:31.050429 21140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 00:35:31.065656 21140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
I0804 00:35:31.066112 21140 main.go:141] libmachine: () Calling .GetVersion
I0804 00:35:31.066671 21140 main.go:141] libmachine: Using API Version 1
I0804 00:35:31.066692 21140 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 00:35:31.066998 21140 main.go:141] libmachine: () Calling .GetMachineName
I0804 00:35:31.067201 21140 main.go:141] libmachine: (ha-230158) Calling .DriverName
I0804 00:35:31.067363 21140 start.go:317] joinCluster: &{Name:ha-230158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-230158 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.188 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0804 00:35:31.067533 21140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
I0804 00:35:31.067557 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHHostname
I0804 00:35:31.070882 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:35:31.071389 21140 main.go:141] libmachine: (ha-230158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:92:75", ip: ""} in network mk-ha-230158: {Iface:virbr1 ExpiryTime:2024-08-04 01:32:45 +0000 UTC Type:0 Mac:52:54:00:a9:92:75 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-230158 Clientid:01:52:54:00:a9:92:75}
I0804 00:35:31.071417 21140 main.go:141] libmachine: (ha-230158) DBG | domain ha-230158 has defined IP address 192.168.39.132 and MAC address 52:54:00:a9:92:75 in network mk-ha-230158
I0804 00:35:31.071587 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHPort
I0804 00:35:31.071755 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHKeyPath
I0804 00:35:31.071920 21140 main.go:141] libmachine: (ha-230158) Calling .GetSSHUsername
I0804 00:35:31.072074 21140 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-3947/.minikube/machines/ha-230158/id_rsa Username:docker}
I0804 00:35:31.263029 21140 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0804 00:35:31.263071 21140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mf76a4.t8pat0uzu8mjr998 --discovery-token-ca-cert-hash sha256:df45234da77be7664dbc18ef5748e1cb4d47aa47bd0026b7b7a8eef37767d0f0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-230158-m03 --control-plane --apiserver-advertise-address=192.168.39.35 --apiserver-bind-port=8443"
I0804 00:35:55.597900 21140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mf76a4.t8pat0uzu8mjr998 --discovery-token-ca-cert-hash sha256:df45234da77be7664dbc18ef5748e1cb4d47aa47bd0026b7b7a8eef37767d0f0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-230158-m03 --control-plane --apiserver-advertise-address=192.168.39.35 --apiserver-bind-port=8443": (24.334796454s)
I0804 00:35:55.597941 21140 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0804 00:35:56.245827 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-230158-m03 minikube.k8s.io/updated_at=2024_08_04T00_35_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=ha-230158 minikube.k8s.io/primary=false
I0804 00:35:56.401002 21140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-230158-m03 node-role.kubernetes.io/control-plane:NoSchedule-
I0804 00:35:56.514911 21140 start.go:319] duration metric: took 25.447543043s to joinCluster
I0804 00:35:56.514984 21140 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0804 00:35:56.515219 21140 config.go:182] Loaded profile config "ha-230158": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0804 00:35:56.516163 21140 out.go:177] * Verifying Kubernetes components...
I0804 00:35:56.517473 21140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0804 00:35:56.792729 21140 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0804 00:35:56.812319 21140 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/19364-3947/kubeconfig
I0804 00:35:56.812567 21140 kapi.go:59] client config for ha-230158: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/profiles/ha-230158/client.key", CAFile:"/home/jenkins/minikube-integration/19364-3947/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W0804 00:35:56.812625 21140 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.132:8443
I0804 00:35:56.812837 21140 node_ready.go:35] waiting up to 6m0s for node "ha-230158-m03" to be "Ready" ...
I0804 00:35:56.812921 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:35:56.812931 21140 round_trippers.go:469] Request Headers:
I0804 00:35:56.812942 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:56.812951 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:56.822186 21140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0804 00:35:57.313604 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:35:57.313624 21140 round_trippers.go:469] Request Headers:
I0804 00:35:57.313634 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:57.313642 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:57.316816 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:35:57.813650 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:35:57.813681 21140 round_trippers.go:469] Request Headers:
I0804 00:35:57.813717 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:57.813728 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:57.817575 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:35:58.313029 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:35:58.313053 21140 round_trippers.go:469] Request Headers:
I0804 00:35:58.313063 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:58.313069 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:58.317051 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:35:58.813933 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:35:58.813954 21140 round_trippers.go:469] Request Headers:
I0804 00:35:58.813962 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:58.813966 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:58.817153 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:35:58.817626 21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
I0804 00:35:59.313955 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:35:59.313984 21140 round_trippers.go:469] Request Headers:
I0804 00:35:59.313995 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:59.314002 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:59.320217 21140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0804 00:35:59.813653 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:35:59.813672 21140 round_trippers.go:469] Request Headers:
I0804 00:35:59.813683 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:35:59.813691 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:35:59.818951 21140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0804 00:36:00.313942 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:00.313967 21140 round_trippers.go:469] Request Headers:
I0804 00:36:00.313979 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:00.313984 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:00.317522 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:00.813432 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:00.813454 21140 round_trippers.go:469] Request Headers:
I0804 00:36:00.813464 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:00.813468 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:00.819638 21140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0804 00:36:00.820329 21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
I0804 00:36:01.313898 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:01.313921 21140 round_trippers.go:469] Request Headers:
I0804 00:36:01.313929 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:01.313933 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:01.317348 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:01.813889 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:01.813917 21140 round_trippers.go:469] Request Headers:
I0804 00:36:01.813929 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:01.813936 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:01.817472 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:02.313397 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:02.313418 21140 round_trippers.go:469] Request Headers:
I0804 00:36:02.313428 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:02.313433 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:02.317733 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:36:02.813698 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:02.813719 21140 round_trippers.go:469] Request Headers:
I0804 00:36:02.813736 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:02.813742 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:02.816874 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:03.313553 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:03.313581 21140 round_trippers.go:469] Request Headers:
I0804 00:36:03.313588 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:03.313592 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:03.316863 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:03.317654 21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
I0804 00:36:03.813066 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:03.813088 21140 round_trippers.go:469] Request Headers:
I0804 00:36:03.813095 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:03.813098 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:03.816776 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:04.313754 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:04.313776 21140 round_trippers.go:469] Request Headers:
I0804 00:36:04.313784 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:04.313788 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:04.317202 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:04.813607 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:04.813634 21140 round_trippers.go:469] Request Headers:
I0804 00:36:04.813645 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:04.813656 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:04.817037 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:05.313378 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:05.313400 21140 round_trippers.go:469] Request Headers:
I0804 00:36:05.313408 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:05.313413 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:05.317269 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:05.317787 21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
I0804 00:36:05.813093 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:05.813116 21140 round_trippers.go:469] Request Headers:
I0804 00:36:05.813124 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:05.813127 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:05.816797 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:06.313024 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:06.313045 21140 round_trippers.go:469] Request Headers:
I0804 00:36:06.313054 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:06.313060 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:06.316202 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:06.813564 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:06.813585 21140 round_trippers.go:469] Request Headers:
I0804 00:36:06.813596 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:06.813600 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:06.817030 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:07.313010 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:07.313036 21140 round_trippers.go:469] Request Headers:
I0804 00:36:07.313046 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:07.313051 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:07.316498 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:07.813775 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:07.813796 21140 round_trippers.go:469] Request Headers:
I0804 00:36:07.813802 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:07.813809 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:07.817261 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:07.818039 21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
I0804 00:36:08.313869 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:08.313890 21140 round_trippers.go:469] Request Headers:
I0804 00:36:08.313901 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:08.313905 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:08.317961 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:36:08.813891 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:08.813912 21140 round_trippers.go:469] Request Headers:
I0804 00:36:08.813920 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:08.813925 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:08.817500 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:09.313373 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:09.313395 21140 round_trippers.go:469] Request Headers:
I0804 00:36:09.313402 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:09.313407 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:09.316594 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:09.813936 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:09.813955 21140 round_trippers.go:469] Request Headers:
I0804 00:36:09.813962 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:09.813967 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:09.817354 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:10.313416 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:10.313443 21140 round_trippers.go:469] Request Headers:
I0804 00:36:10.313451 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:10.313455 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:10.316728 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:10.317273 21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
I0804 00:36:10.813608 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:10.813628 21140 round_trippers.go:469] Request Headers:
I0804 00:36:10.813635 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:10.813642 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:10.816719 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:11.313687 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:11.313766 21140 round_trippers.go:469] Request Headers:
I0804 00:36:11.313787 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:11.313801 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:11.317800 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:11.813156 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:11.813176 21140 round_trippers.go:469] Request Headers:
I0804 00:36:11.813184 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:11.813187 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:11.816530 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:12.313362 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:12.313384 21140 round_trippers.go:469] Request Headers:
I0804 00:36:12.313393 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:12.313397 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:12.316689 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:12.317653 21140 node_ready.go:53] node "ha-230158-m03" has status "Ready":"False"
I0804 00:36:12.813988 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:12.814026 21140 round_trippers.go:469] Request Headers:
I0804 00:36:12.814037 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:12.814043 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:12.817685 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:13.313096 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:13.313121 21140 round_trippers.go:469] Request Headers:
I0804 00:36:13.313132 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:13.313139 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:13.316170 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:13.813487 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:13.813507 21140 round_trippers.go:469] Request Headers:
I0804 00:36:13.813519 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:13.813522 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:13.817165 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:14.313242 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:14.313271 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.313283 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.313291 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.316695 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:14.317572 21140 node_ready.go:49] node "ha-230158-m03" has status "Ready":"True"
I0804 00:36:14.317594 21140 node_ready.go:38] duration metric: took 17.504738279s for node "ha-230158-m03" to be "Ready" ...
I0804 00:36:14.317604 21140 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0804 00:36:14.317669 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
I0804 00:36:14.317682 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.317689 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.317693 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.324294 21140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0804 00:36:14.330984 21140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cqbjc" in "kube-system" namespace to be "Ready" ...
I0804 00:36:14.331072 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cqbjc
I0804 00:36:14.331083 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.331089 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.331094 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.334136 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:14.335282 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:36:14.335301 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.335312 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.335319 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.338942 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:14.339726 21140 pod_ready.go:92] pod "coredns-7db6d8ff4d-cqbjc" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:14.339743 21140 pod_ready.go:81] duration metric: took 8.732646ms for pod "coredns-7db6d8ff4d-cqbjc" in "kube-system" namespace to be "Ready" ...
I0804 00:36:14.339752 21140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xt2gb" in "kube-system" namespace to be "Ready" ...
I0804 00:36:14.339795 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xt2gb
I0804 00:36:14.339803 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.339809 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.339813 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.342514 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:36:14.343458 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:36:14.343472 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.343479 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.343483 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.345977 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:36:14.346711 21140 pod_ready.go:92] pod "coredns-7db6d8ff4d-xt2gb" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:14.346729 21140 pod_ready.go:81] duration metric: took 6.970575ms for pod "coredns-7db6d8ff4d-xt2gb" in "kube-system" namespace to be "Ready" ...
I0804 00:36:14.346738 21140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:36:14.346793 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-230158
I0804 00:36:14.346803 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.346814 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.346822 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.349116 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:36:14.349889 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:36:14.349903 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.349912 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.349918 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.352287 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:36:14.352752 21140 pod_ready.go:92] pod "etcd-ha-230158" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:14.352768 21140 pod_ready.go:81] duration metric: took 6.022837ms for pod "etcd-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:36:14.352776 21140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:36:14.352823 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-230158-m02
I0804 00:36:14.352833 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.352840 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.352845 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.355251 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:36:14.356139 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:36:14.356154 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.356162 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.356168 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.358804 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:36:14.359221 21140 pod_ready.go:92] pod "etcd-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:14.359236 21140 pod_ready.go:81] duration metric: took 6.450652ms for pod "etcd-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:36:14.359246 21140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
I0804 00:36:14.513671 21140 request.go:629] Waited for 154.368864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-230158-m03
I0804 00:36:14.513765 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/etcd-ha-230158-m03
I0804 00:36:14.513774 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.513794 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.513811 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.517308 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:14.713219 21140 request.go:629] Waited for 195.282606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:14.713271 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:14.713278 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.713287 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.713292 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.717115 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:14.717626 21140 pod_ready.go:92] pod "etcd-ha-230158-m03" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:14.717649 21140 pod_ready.go:81] duration metric: took 358.394373ms for pod "etcd-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
I0804 00:36:14.717671 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:36:14.913544 21140 request.go:629] Waited for 195.774852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158
I0804 00:36:14.913606 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158
I0804 00:36:14.913611 21140 round_trippers.go:469] Request Headers:
I0804 00:36:14.913620 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:14.913627 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:14.916592 21140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0804 00:36:15.113797 21140 request.go:629] Waited for 196.366235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:36:15.113873 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:36:15.113880 21140 round_trippers.go:469] Request Headers:
I0804 00:36:15.113890 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:15.113897 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:15.117750 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:15.118590 21140 pod_ready.go:92] pod "kube-apiserver-ha-230158" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:15.118608 21140 pod_ready.go:81] duration metric: took 400.926261ms for pod "kube-apiserver-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:36:15.118618 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:36:15.313717 21140 request.go:629] Waited for 195.03725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158-m02
I0804 00:36:15.313792 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158-m02
I0804 00:36:15.313797 21140 round_trippers.go:469] Request Headers:
I0804 00:36:15.313805 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:15.313808 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:15.317077 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:15.513361 21140 request.go:629] Waited for 195.280512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:36:15.513441 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:36:15.513458 21140 round_trippers.go:469] Request Headers:
I0804 00:36:15.513471 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:15.513485 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:15.517319 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:15.517959 21140 pod_ready.go:92] pod "kube-apiserver-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:15.517975 21140 pod_ready.go:81] duration metric: took 399.350403ms for pod "kube-apiserver-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:36:15.517987 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
I0804 00:36:15.714166 21140 request.go:629] Waited for 196.119755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158-m03
I0804 00:36:15.714246 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-230158-m03
I0804 00:36:15.714254 21140 round_trippers.go:469] Request Headers:
I0804 00:36:15.714269 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:15.714277 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:15.717553 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:15.913485 21140 request.go:629] Waited for 195.02327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:15.913563 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:15.913572 21140 round_trippers.go:469] Request Headers:
I0804 00:36:15.913584 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:15.913595 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:15.916620 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:15.917187 21140 pod_ready.go:92] pod "kube-apiserver-ha-230158-m03" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:15.917207 21140 pod_ready.go:81] duration metric: took 399.213201ms for pod "kube-apiserver-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
I0804 00:36:15.917217 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:36:16.113928 21140 request.go:629] Waited for 196.6406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158
I0804 00:36:16.114040 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158
I0804 00:36:16.114055 21140 round_trippers.go:469] Request Headers:
I0804 00:36:16.114064 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:16.114074 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:16.118199 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:36:16.314136 21140 request.go:629] Waited for 193.357767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:36:16.314194 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:36:16.314199 21140 round_trippers.go:469] Request Headers:
I0804 00:36:16.314207 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:16.314211 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:16.317233 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:16.318043 21140 pod_ready.go:92] pod "kube-controller-manager-ha-230158" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:16.318063 21140 pod_ready.go:81] duration metric: took 400.838103ms for pod "kube-controller-manager-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:36:16.318077 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:36:16.514190 21140 request.go:629] Waited for 196.049158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158-m02
I0804 00:36:16.514284 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158-m02
I0804 00:36:16.514291 21140 round_trippers.go:469] Request Headers:
I0804 00:36:16.514299 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:16.514307 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:16.517440 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:16.713372 21140 request.go:629] Waited for 195.27709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:36:16.713422 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:36:16.713428 21140 round_trippers.go:469] Request Headers:
I0804 00:36:16.713459 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:16.713467 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:16.717134 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:16.717887 21140 pod_ready.go:92] pod "kube-controller-manager-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:16.717903 21140 pod_ready.go:81] duration metric: took 399.816963ms for pod "kube-controller-manager-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:36:16.717913 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
I0804 00:36:16.913366 21140 request.go:629] Waited for 195.375288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158-m03
I0804 00:36:16.913421 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-230158-m03
I0804 00:36:16.913427 21140 round_trippers.go:469] Request Headers:
I0804 00:36:16.913434 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:16.913452 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:16.917008 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:17.114002 21140 request.go:629] Waited for 196.360087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:17.114062 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:17.114083 21140 round_trippers.go:469] Request Headers:
I0804 00:36:17.114094 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:17.114099 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:17.118060 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:17.118868 21140 pod_ready.go:92] pod "kube-controller-manager-ha-230158-m03" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:17.118889 21140 pod_ready.go:81] duration metric: took 400.967735ms for pod "kube-controller-manager-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
I0804 00:36:17.118898 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8tgp2" in "kube-system" namespace to be "Ready" ...
I0804 00:36:17.313818 21140 request.go:629] Waited for 194.852885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tgp2
I0804 00:36:17.313892 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tgp2
I0804 00:36:17.313903 21140 round_trippers.go:469] Request Headers:
I0804 00:36:17.313914 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:17.313926 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:17.317347 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:17.513381 21140 request.go:629] Waited for 195.279495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:36:17.513450 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:36:17.513455 21140 round_trippers.go:469] Request Headers:
I0804 00:36:17.513463 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:17.513466 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:17.517059 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:17.517804 21140 pod_ready.go:92] pod "kube-proxy-8tgp2" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:17.517823 21140 pod_ready.go:81] duration metric: took 398.918885ms for pod "kube-proxy-8tgp2" in "kube-system" namespace to be "Ready" ...
I0804 00:36:17.517832 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-llxx2" in "kube-system" namespace to be "Ready" ...
I0804 00:36:17.713990 21140 request.go:629] Waited for 196.084751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-llxx2
I0804 00:36:17.714051 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-llxx2
I0804 00:36:17.714058 21140 round_trippers.go:469] Request Headers:
I0804 00:36:17.714067 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:17.714072 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:17.717314 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:17.913369 21140 request.go:629] Waited for 195.291585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:17.913427 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:17.913432 21140 round_trippers.go:469] Request Headers:
I0804 00:36:17.913452 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:17.913459 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:17.917761 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:36:17.918331 21140 pod_ready.go:92] pod "kube-proxy-llxx2" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:17.918350 21140 pod_ready.go:81] duration metric: took 400.511651ms for pod "kube-proxy-llxx2" in "kube-system" namespace to be "Ready" ...
I0804 00:36:17.918358 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vdn92" in "kube-system" namespace to be "Ready" ...
I0804 00:36:18.113862 21140 request.go:629] Waited for 195.443141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdn92
I0804 00:36:18.113931 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdn92
I0804 00:36:18.113947 21140 round_trippers.go:469] Request Headers:
I0804 00:36:18.113959 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:18.113967 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:18.118260 21140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0804 00:36:18.313934 21140 request.go:629] Waited for 194.230466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:36:18.313994 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:36:18.314001 21140 round_trippers.go:469] Request Headers:
I0804 00:36:18.314017 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:18.314030 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:18.317513 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:18.318372 21140 pod_ready.go:92] pod "kube-proxy-vdn92" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:18.318391 21140 pod_ready.go:81] duration metric: took 400.027057ms for pod "kube-proxy-vdn92" in "kube-system" namespace to be "Ready" ...
I0804 00:36:18.318402 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:36:18.513379 21140 request.go:629] Waited for 194.888882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158
I0804 00:36:18.513443 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158
I0804 00:36:18.513452 21140 round_trippers.go:469] Request Headers:
I0804 00:36:18.513461 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:18.513470 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:18.516837 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:18.714010 21140 request.go:629] Waited for 196.366502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:36:18.714127 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158
I0804 00:36:18.714142 21140 round_trippers.go:469] Request Headers:
I0804 00:36:18.714152 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:18.714161 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:18.718093 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:18.718711 21140 pod_ready.go:92] pod "kube-scheduler-ha-230158" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:18.718732 21140 pod_ready.go:81] duration metric: took 400.322513ms for pod "kube-scheduler-ha-230158" in "kube-system" namespace to be "Ready" ...
I0804 00:36:18.718744 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:36:18.913928 21140 request.go:629] Waited for 195.096761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158-m02
I0804 00:36:18.913992 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158-m02
I0804 00:36:18.913998 21140 round_trippers.go:469] Request Headers:
I0804 00:36:18.914006 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:18.914012 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:18.917481 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:19.114007 21140 request.go:629] Waited for 195.769588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:36:19.114057 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m02
I0804 00:36:19.114062 21140 round_trippers.go:469] Request Headers:
I0804 00:36:19.114070 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:19.114074 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:19.117807 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:19.118355 21140 pod_ready.go:92] pod "kube-scheduler-ha-230158-m02" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:19.118372 21140 pod_ready.go:81] duration metric: took 399.621886ms for pod "kube-scheduler-ha-230158-m02" in "kube-system" namespace to be "Ready" ...
I0804 00:36:19.118382 21140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
I0804 00:36:19.313596 21140 request.go:629] Waited for 195.149418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158-m03
I0804 00:36:19.313674 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-230158-m03
I0804 00:36:19.313680 21140 round_trippers.go:469] Request Headers:
I0804 00:36:19.313687 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:19.313691 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:19.317150 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:19.514026 21140 request.go:629] Waited for 196.255241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:19.514116 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes/ha-230158-m03
I0804 00:36:19.514126 21140 round_trippers.go:469] Request Headers:
I0804 00:36:19.514134 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:19.514137 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:19.517549 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:19.518084 21140 pod_ready.go:92] pod "kube-scheduler-ha-230158-m03" in "kube-system" namespace has status "Ready":"True"
I0804 00:36:19.518102 21140 pod_ready.go:81] duration metric: took 399.712625ms for pod "kube-scheduler-ha-230158-m03" in "kube-system" namespace to be "Ready" ...
I0804 00:36:19.518112 21140 pod_ready.go:38] duration metric: took 5.20049857s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0804 00:36:19.518128 21140 api_server.go:52] waiting for apiserver process to appear ...
I0804 00:36:19.518177 21140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0804 00:36:19.535013 21140 api_server.go:72] duration metric: took 23.019993564s to wait for apiserver process to appear ...
I0804 00:36:19.535039 21140 api_server.go:88] waiting for apiserver healthz status ...
I0804 00:36:19.535059 21140 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8443/healthz ...
I0804 00:36:19.545694 21140 api_server.go:279] https://192.168.39.132:8443/healthz returned 200:
ok
I0804 00:36:19.545771 21140 round_trippers.go:463] GET https://192.168.39.132:8443/version
I0804 00:36:19.545782 21140 round_trippers.go:469] Request Headers:
I0804 00:36:19.545792 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:19.545799 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:19.546739 21140 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0804 00:36:19.546810 21140 api_server.go:141] control plane version: v1.30.3
I0804 00:36:19.546827 21140 api_server.go:131] duration metric: took 11.780862ms to wait for apiserver health ...
I0804 00:36:19.546837 21140 system_pods.go:43] waiting for kube-system pods to appear ...
I0804 00:36:19.714166 21140 request.go:629] Waited for 167.261084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
I0804 00:36:19.714216 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
I0804 00:36:19.714221 21140 round_trippers.go:469] Request Headers:
I0804 00:36:19.714242 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:19.714247 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:19.720934 21140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0804 00:36:19.729908 21140 system_pods.go:59] 24 kube-system pods found
I0804 00:36:19.729934 21140 system_pods.go:61] "coredns-7db6d8ff4d-cqbjc" [d99b5cde-3b5b-4c29-82c4-ec9fa36b4479] Running
I0804 00:36:19.729939 21140 system_pods.go:61] "coredns-7db6d8ff4d-xt2gb" [2bd541a1-7bf0-4709-b600-365d5527b936] Running
I0804 00:36:19.729943 21140 system_pods.go:61] "etcd-ha-230158" [dc6a8dde-229d-4857-8f08-dcc8399b1420] Running
I0804 00:36:19.729947 21140 system_pods.go:61] "etcd-ha-230158-m02" [ed2085f3-8b06-4e15-8ed3-bd434d9aaebb] Running
I0804 00:36:19.729950 21140 system_pods.go:61] "etcd-ha-230158-m03" [46db3fc8-2779-48a0-94dc-547182e460aa] Running
I0804 00:36:19.729953 21140 system_pods.go:61] "kindnet-n5cql" [56108054-acd3-48ae-b929-75bd31cbd1ad] Running
I0804 00:36:19.729956 21140 system_pods.go:61] "kindnet-w86v4" [1435af28-2e6c-4fa4-8315-00d18be70d00] Running
I0804 00:36:19.729959 21140 system_pods.go:61] "kindnet-wfd5t" [b7ccd328-13aa-4161-8a20-5df8d153592f] Running
I0804 00:36:19.729963 21140 system_pods.go:61] "kube-apiserver-ha-230158" [8c1d6b4d-e30e-4b30-84ff-f53490a7d9ec] Running
I0804 00:36:19.729966 21140 system_pods.go:61] "kube-apiserver-ha-230158-m02" [8d384508-62d2-450a-a512-622aac96913a] Running
I0804 00:36:19.729969 21140 system_pods.go:61] "kube-apiserver-ha-230158-m03" [3a2f9422-7354-47e1-87cc-988fd0e44316] Running
I0804 00:36:19.729972 21140 system_pods.go:61] "kube-controller-manager-ha-230158" [cf39dcfb-ca37-45e7-9306-456ea22b484c] Running
I0804 00:36:19.729975 21140 system_pods.go:61] "kube-controller-manager-ha-230158-m02" [c751903c-cb15-4718-87d7-f9ccf79d5869] Running
I0804 00:36:19.729979 21140 system_pods.go:61] "kube-controller-manager-ha-230158-m03" [c49084bd-2f5d-495b-ba60-9861b0681e5e] Running
I0804 00:36:19.729982 21140 system_pods.go:61] "kube-proxy-8tgp2" [17ce55b9-8d25-4b4a-9b12-ff2cb84c22fa] Running
I0804 00:36:19.729988 21140 system_pods.go:61] "kube-proxy-llxx2" [b9fbc18d-404d-4733-a31b-d95ab7e04dfd] Running
I0804 00:36:19.729990 21140 system_pods.go:61] "kube-proxy-vdn92" [02c77eda-8f0e-49d4-ae42-bbf18d0eeaf5] Running
I0804 00:36:19.729993 21140 system_pods.go:61] "kube-scheduler-ha-230158" [c24d7658-a418-4a21-8e93-e31af5d65e05] Running
I0804 00:36:19.729997 21140 system_pods.go:61] "kube-scheduler-ha-230158-m02" [97d10375-f0ca-4e13-bc7b-8d775aea4678] Running
I0804 00:36:19.730000 21140 system_pods.go:61] "kube-scheduler-ha-230158-m03" [d5f8d184-aa92-4e8b-912d-788ccb98fe32] Running
I0804 00:36:19.730003 21140 system_pods.go:61] "kube-vip-ha-230158" [f784b7b5-0db7-49f2-bcac-3a0dbeee74dd] Running
I0804 00:36:19.730006 21140 system_pods.go:61] "kube-vip-ha-230158-m02" [0c04a6aa-7d79-4318-9cd7-b936d3358e19] Running
I0804 00:36:19.730009 21140 system_pods.go:61] "kube-vip-ha-230158-m03" [d8bb79c6-6ae4-47e2-ad7b-e731f070228c] Running
I0804 00:36:19.730012 21140 system_pods.go:61] "storage-provisioner" [653e0c50-af0a-4708-aaa9-b0d63616df94] Running
I0804 00:36:19.730020 21140 system_pods.go:74] duration metric: took 183.175097ms to wait for pod list to return data ...
I0804 00:36:19.730029 21140 default_sa.go:34] waiting for default service account to be created ...
I0804 00:36:19.913280 21140 request.go:629] Waited for 183.162867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
I0804 00:36:19.913337 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/default/serviceaccounts
I0804 00:36:19.913348 21140 round_trippers.go:469] Request Headers:
I0804 00:36:19.913358 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:19.913362 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:19.916500 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:19.916624 21140 default_sa.go:45] found service account: "default"
I0804 00:36:19.916638 21140 default_sa.go:55] duration metric: took 186.603168ms for default service account to be created ...
I0804 00:36:19.916645 21140 system_pods.go:116] waiting for k8s-apps to be running ...
I0804 00:36:20.114154 21140 request.go:629] Waited for 197.446057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
I0804 00:36:20.114216 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/namespaces/kube-system/pods
I0804 00:36:20.114224 21140 round_trippers.go:469] Request Headers:
I0804 00:36:20.114258 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:20.114267 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:20.127325 21140 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
I0804 00:36:20.136207 21140 system_pods.go:86] 24 kube-system pods found
I0804 00:36:20.136232 21140 system_pods.go:89] "coredns-7db6d8ff4d-cqbjc" [d99b5cde-3b5b-4c29-82c4-ec9fa36b4479] Running
I0804 00:36:20.136238 21140 system_pods.go:89] "coredns-7db6d8ff4d-xt2gb" [2bd541a1-7bf0-4709-b600-365d5527b936] Running
I0804 00:36:20.136242 21140 system_pods.go:89] "etcd-ha-230158" [dc6a8dde-229d-4857-8f08-dcc8399b1420] Running
I0804 00:36:20.136246 21140 system_pods.go:89] "etcd-ha-230158-m02" [ed2085f3-8b06-4e15-8ed3-bd434d9aaebb] Running
I0804 00:36:20.136250 21140 system_pods.go:89] "etcd-ha-230158-m03" [46db3fc8-2779-48a0-94dc-547182e460aa] Running
I0804 00:36:20.136254 21140 system_pods.go:89] "kindnet-n5cql" [56108054-acd3-48ae-b929-75bd31cbd1ad] Running
I0804 00:36:20.136257 21140 system_pods.go:89] "kindnet-w86v4" [1435af28-2e6c-4fa4-8315-00d18be70d00] Running
I0804 00:36:20.136262 21140 system_pods.go:89] "kindnet-wfd5t" [b7ccd328-13aa-4161-8a20-5df8d153592f] Running
I0804 00:36:20.136266 21140 system_pods.go:89] "kube-apiserver-ha-230158" [8c1d6b4d-e30e-4b30-84ff-f53490a7d9ec] Running
I0804 00:36:20.136270 21140 system_pods.go:89] "kube-apiserver-ha-230158-m02" [8d384508-62d2-450a-a512-622aac96913a] Running
I0804 00:36:20.136274 21140 system_pods.go:89] "kube-apiserver-ha-230158-m03" [3a2f9422-7354-47e1-87cc-988fd0e44316] Running
I0804 00:36:20.136278 21140 system_pods.go:89] "kube-controller-manager-ha-230158" [cf39dcfb-ca37-45e7-9306-456ea22b484c] Running
I0804 00:36:20.136286 21140 system_pods.go:89] "kube-controller-manager-ha-230158-m02" [c751903c-cb15-4718-87d7-f9ccf79d5869] Running
I0804 00:36:20.136289 21140 system_pods.go:89] "kube-controller-manager-ha-230158-m03" [c49084bd-2f5d-495b-ba60-9861b0681e5e] Running
I0804 00:36:20.136293 21140 system_pods.go:89] "kube-proxy-8tgp2" [17ce55b9-8d25-4b4a-9b12-ff2cb84c22fa] Running
I0804 00:36:20.136298 21140 system_pods.go:89] "kube-proxy-llxx2" [b9fbc18d-404d-4733-a31b-d95ab7e04dfd] Running
I0804 00:36:20.136301 21140 system_pods.go:89] "kube-proxy-vdn92" [02c77eda-8f0e-49d4-ae42-bbf18d0eeaf5] Running
I0804 00:36:20.136305 21140 system_pods.go:89] "kube-scheduler-ha-230158" [c24d7658-a418-4a21-8e93-e31af5d65e05] Running
I0804 00:36:20.136310 21140 system_pods.go:89] "kube-scheduler-ha-230158-m02" [97d10375-f0ca-4e13-bc7b-8d775aea4678] Running
I0804 00:36:20.136315 21140 system_pods.go:89] "kube-scheduler-ha-230158-m03" [d5f8d184-aa92-4e8b-912d-788ccb98fe32] Running
I0804 00:36:20.136319 21140 system_pods.go:89] "kube-vip-ha-230158" [f784b7b5-0db7-49f2-bcac-3a0dbeee74dd] Running
I0804 00:36:20.136323 21140 system_pods.go:89] "kube-vip-ha-230158-m02" [0c04a6aa-7d79-4318-9cd7-b936d3358e19] Running
I0804 00:36:20.136330 21140 system_pods.go:89] "kube-vip-ha-230158-m03" [d8bb79c6-6ae4-47e2-ad7b-e731f070228c] Running
I0804 00:36:20.136333 21140 system_pods.go:89] "storage-provisioner" [653e0c50-af0a-4708-aaa9-b0d63616df94] Running
I0804 00:36:20.136339 21140 system_pods.go:126] duration metric: took 219.689305ms to wait for k8s-apps to be running ...
I0804 00:36:20.136348 21140 system_svc.go:44] waiting for kubelet service to be running ....
I0804 00:36:20.136386 21140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0804 00:36:20.153198 21140 system_svc.go:56] duration metric: took 16.84159ms WaitForService to wait for kubelet
I0804 00:36:20.153240 21140 kubeadm.go:582] duration metric: took 23.638221933s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0804 00:36:20.153270 21140 node_conditions.go:102] verifying NodePressure condition ...
I0804 00:36:20.313629 21140 request.go:629] Waited for 160.279047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.132:8443/api/v1/nodes
I0804 00:36:20.313684 21140 round_trippers.go:463] GET https://192.168.39.132:8443/api/v1/nodes
I0804 00:36:20.313690 21140 round_trippers.go:469] Request Headers:
I0804 00:36:20.313697 21140 round_trippers.go:473] Accept: application/json, */*
I0804 00:36:20.313702 21140 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0804 00:36:20.317377 21140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0804 00:36:20.318808 21140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0804 00:36:20.318827 21140 node_conditions.go:123] node cpu capacity is 2
I0804 00:36:20.318839 21140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0804 00:36:20.318842 21140 node_conditions.go:123] node cpu capacity is 2
I0804 00:36:20.318845 21140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0804 00:36:20.318848 21140 node_conditions.go:123] node cpu capacity is 2
I0804 00:36:20.318851 21140 node_conditions.go:105] duration metric: took 165.576428ms to run NodePressure ...
I0804 00:36:20.318862 21140 start.go:241] waiting for startup goroutines ...
I0804 00:36:20.318882 21140 start.go:255] writing updated cluster config ...
I0804 00:36:20.319145 21140 ssh_runner.go:195] Run: rm -f paused
I0804 00:36:20.367562 21140 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
I0804 00:36:20.369783 21140 out.go:177] * Done! kubectl is now configured to use "ha-230158" cluster and "default" namespace by default
==> Docker <==
Aug 04 00:33:51 ha-230158 cri-dockerd[1092]: time="2024-08-04T00:33:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/009a8093717e550676eeaf55e6e91ec382fed4759cb3cca76cd44e62049adf56/resolv.conf as [nameserver 192.168.122.1]"
Aug 04 00:33:51 ha-230158 cri-dockerd[1092]: time="2024-08-04T00:33:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06fad541ab06ffd1f3e824b90aa63d710251f7fa87d56e35541370ada2f7553e/resolv.conf as [nameserver 192.168.122.1]"
Aug 04 00:33:51 ha-230158 cri-dockerd[1092]: time="2024-08-04T00:33:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a62698d656ed8bc98a6334b4542ba4f5ecc61afc972b99c2e5ef586f1c88c14/resolv.conf as [nameserver 192.168.122.1]"
Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.015145680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.015399497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.015411491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.015513548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.041452513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.041717458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.041749290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.042276749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.105541332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.105752619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.105862987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 04 00:33:52 ha-230158 dockerd[1202]: time="2024-08-04T00:33:52.106364455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 04 00:36:21 ha-230158 dockerd[1202]: time="2024-08-04T00:36:21.812831312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 04 00:36:21 ha-230158 dockerd[1202]: time="2024-08-04T00:36:21.812977993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 04 00:36:21 ha-230158 dockerd[1202]: time="2024-08-04T00:36:21.815236704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 04 00:36:21 ha-230158 dockerd[1202]: time="2024-08-04T00:36:21.815385085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 04 00:36:21 ha-230158 cri-dockerd[1092]: time="2024-08-04T00:36:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd1edd455378c9ebd00d93d6f0a55aab769884307524020f5bc39507f5df1acd/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Aug 04 00:36:23 ha-230158 cri-dockerd[1092]: time="2024-08-04T00:36:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
Aug 04 00:36:23 ha-230158 dockerd[1202]: time="2024-08-04T00:36:23.331366306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 04 00:36:23 ha-230158 dockerd[1202]: time="2024-08-04T00:36:23.332077553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 04 00:36:23 ha-230158 dockerd[1202]: time="2024-08-04T00:36:23.332409422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 04 00:36:23 ha-230158 dockerd[1202]: time="2024-08-04T00:36:23.333025484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
69954bb3c52d0 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 3 minutes ago Running busybox 0 fd1edd455378c busybox-fc5497c4f-zkdbc
4507a06a5c525 cbb01a7bd410d 6 minutes ago Running coredns 0 7a62698d656ed coredns-7db6d8ff4d-xt2gb
6bf6de750968a cbb01a7bd410d 6 minutes ago Running coredns 0 009a8093717e5 coredns-7db6d8ff4d-cqbjc
7c239c3990b6e 6e38f40d628db 6 minutes ago Running storage-provisioner 0 06fad541ab06f storage-provisioner
210ee81e70d86 kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a 6 minutes ago Running kindnet-cni 0 9e15e12f7e085 kindnet-wfd5t
7cbd24fa0e03b 55bb025d2cfa5 6 minutes ago Running kube-proxy 0 5dde6fe74ac82 kube-proxy-vdn92
a95a3373ad39b ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f 6 minutes ago Running kube-vip 0 7a020dfa73795 kube-vip-ha-230158
88a839c50b4a3 3861cfcd7c04c 7 minutes ago Running etcd 0 888fe0699f5db etcd-ha-230158
91915a79609ad 1f6d574d502f3 7 minutes ago Running kube-apiserver 0 211ccf8ecbfd1 kube-apiserver-ha-230158
f1d34bc5f7153 3edc18e7b7672 7 minutes ago Running kube-scheduler 0 708cdd025b014 kube-scheduler-ha-230158
0493928ca9b85 76932a3b37d7e 7 minutes ago Running kube-controller-manager 0 8a83f286e0d46 kube-controller-manager-ha-230158
==> coredns [4507a06a5c52] <==
[INFO] 10.244.0.4:35608 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000124456s
[INFO] 10.244.0.4:51919 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002076181s
[INFO] 10.244.1.2:59825 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117139s
[INFO] 10.244.1.2:57254 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00190594s
[INFO] 10.244.1.2:39544 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012697s
[INFO] 10.244.2.2:57787 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017108s
[INFO] 10.244.2.2:33350 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001178116s
[INFO] 10.244.2.2:49475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146562s
[INFO] 10.244.2.2:42126 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000210258s
[INFO] 10.244.0.4:48459 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124259s
[INFO] 10.244.0.4:35309 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155423s
[INFO] 10.244.0.4:52239 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040083s
[INFO] 10.244.1.2:38884 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161915s
[INFO] 10.244.1.2:56249 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000257048s
[INFO] 10.244.1.2:51787 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095902s
[INFO] 10.244.2.2:53443 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092673s
[INFO] 10.244.2.2:43029 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092806s
[INFO] 10.244.2.2:40101 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083123s
[INFO] 10.244.0.4:53268 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093003s
[INFO] 10.244.0.4:43144 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082857s
[INFO] 10.244.1.2:53993 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215455s
[INFO] 10.244.2.2:54482 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129926s
[INFO] 10.244.2.2:37912 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000164869s
[INFO] 10.244.0.4:42684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091614s
[INFO] 10.244.0.4:42052 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114145s
==> coredns [6bf6de750968] <==
[INFO] 10.244.1.2:34476 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003812454s
[INFO] 10.244.1.2:36251 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000182778s
[INFO] 10.244.1.2:52606 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167278s
[INFO] 10.244.1.2:54930 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130115s
[INFO] 10.244.1.2:38743 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116259s
[INFO] 10.244.2.2:55784 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00189069s
[INFO] 10.244.2.2:45571 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000113702s
[INFO] 10.244.2.2:34311 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007235s
[INFO] 10.244.2.2:43608 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127033s
[INFO] 10.244.0.4:35071 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001631133s
[INFO] 10.244.0.4:40853 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000044285s
[INFO] 10.244.0.4:53127 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00127252s
[INFO] 10.244.0.4:56586 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005511s
[INFO] 10.244.0.4:50880 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00004863s
[INFO] 10.244.1.2:58534 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101406s
[INFO] 10.244.2.2:36136 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152059s
[INFO] 10.244.0.4:44755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115732s
[INFO] 10.244.0.4:36492 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000200582s
[INFO] 10.244.1.2:34304 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137164s
[INFO] 10.244.1.2:57141 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207512s
[INFO] 10.244.1.2:33291 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000266329s
[INFO] 10.244.2.2:60551 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000213328s
[INFO] 10.244.2.2:39454 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134305s
[INFO] 10.244.0.4:55561 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000063379s
[INFO] 10.244.0.4:49797 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109389s
==> describe nodes <==
Name: ha-230158
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ha-230158
kubernetes.io/os=linux
minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
minikube.k8s.io/name=ha-230158
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_08_04T00_33_21_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 04 Aug 2024 00:33:19 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ha-230158
AcquireTime: <unset>
RenewTime: Sun, 04 Aug 2024 00:40:10 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 04 Aug 2024 00:36:55 +0000 Sun, 04 Aug 2024 00:33:19 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 04 Aug 2024 00:36:55 +0000 Sun, 04 Aug 2024 00:33:19 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 04 Aug 2024 00:36:55 +0000 Sun, 04 Aug 2024 00:33:19 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 04 Aug 2024 00:36:55 +0000 Sun, 04 Aug 2024 00:33:51 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.132
Hostname: ha-230158
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: abc2ed2fdf234afab4b4880adb15e874
System UUID: abc2ed2f-df23-4afa-b4b4-880adb15e874
Boot ID: 2ca41502-5213-44ab-89e1-9b63019791e1
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.1.1
Kubelet Version: v1.30.3
Kube-Proxy Version: v1.30.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-fc5497c4f-zkdbc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m54s
kube-system coredns-7db6d8ff4d-cqbjc 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 6m42s
kube-system coredns-7db6d8ff4d-xt2gb 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 6m42s
kube-system etcd-ha-230158 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 6m55s
kube-system kindnet-wfd5t 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 6m42s
kube-system kube-apiserver-ha-230158 250m (12%) 0 (0%) 0 (0%) 0 (0%) 6m55s
kube-system kube-controller-manager-ha-230158 200m (10%) 0 (0%) 0 (0%) 0 (0%) 6m55s
kube-system kube-proxy-vdn92 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m42s
kube-system kube-scheduler-ha-230158 100m (5%) 0 (0%) 0 (0%) 0 (0%) 6m55s
kube-system kube-vip-ha-230158 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m55s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m41s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 290Mi (13%) 390Mi (18%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 6m38s kube-proxy
Normal Starting 6m55s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 6m55s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 6m55s kubelet Node ha-230158 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m55s kubelet Node ha-230158 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m55s kubelet Node ha-230158 status is now: NodeHasSufficientPID
Normal RegisteredNode 6m43s node-controller Node ha-230158 event: Registered Node ha-230158 in Controller
Normal NodeReady 6m24s kubelet Node ha-230158 status is now: NodeReady
Normal RegisteredNode 5m24s node-controller Node ha-230158 event: Registered Node ha-230158 in Controller
Normal RegisteredNode 4m4s node-controller Node ha-230158 event: Registered Node ha-230158 in Controller
Name: ha-230158-m02
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ha-230158-m02
kubernetes.io/os=linux
minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
minikube.k8s.io/name=ha-230158
minikube.k8s.io/primary=false
minikube.k8s.io/updated_at=2024_08_04T00_34_35_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 04 Aug 2024 00:34:32 +0000
Taints: node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unreachable:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: ha-230158-m02
AcquireTime: <unset>
RenewTime: Sun, 04 Aug 2024 00:37:36 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure Unknown Sun, 04 Aug 2024 00:36:34 +0000 Sun, 04 Aug 2024 00:38:16 +0000 NodeStatusUnknown Kubelet stopped posting node status.
DiskPressure Unknown Sun, 04 Aug 2024 00:36:34 +0000 Sun, 04 Aug 2024 00:38:16 +0000 NodeStatusUnknown Kubelet stopped posting node status.
PIDPressure Unknown Sun, 04 Aug 2024 00:36:34 +0000 Sun, 04 Aug 2024 00:38:16 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Ready Unknown Sun, 04 Aug 2024 00:36:34 +0000 Sun, 04 Aug 2024 00:38:16 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Addresses:
InternalIP: 192.168.39.188
Hostname: ha-230158-m02
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: 155844a39b0945b984994c69c6243cc5
System UUID: 155844a3-9b09-45b9-8499-4c69c6243cc5
Boot ID: 071fc9b5-660b-440f-97f2-8a7bd3388cf4
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.1.1
Kubelet Version: v1.30.3
Kube-Proxy Version: v1.30.3
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-fc5497c4f-v69qb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m54s
kube-system etcd-ha-230158-m02 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 5m40s
kube-system kindnet-n5cql 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 5m43s
kube-system kube-apiserver-ha-230158-m02 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m40s
kube-system kube-controller-manager-ha-230158-m02 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m40s
kube-system kube-proxy-8tgp2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m43s
kube-system kube-scheduler-ha-230158-m02 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m40s
kube-system kube-vip-ha-230158-m02 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m38s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 100m (5%)
memory 150Mi (7%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 5m38s kube-proxy
Normal RegisteredNode 5m43s node-controller Node ha-230158-m02 event: Registered Node ha-230158-m02 in Controller
Normal NodeHasSufficientMemory 5m43s (x8 over 5m43s) kubelet Node ha-230158-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m43s (x8 over 5m43s) kubelet Node ha-230158-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m43s (x7 over 5m43s) kubelet Node ha-230158-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m43s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 5m24s node-controller Node ha-230158-m02 event: Registered Node ha-230158-m02 in Controller
Normal RegisteredNode 4m4s node-controller Node ha-230158-m02 event: Registered Node ha-230158-m02 in Controller
Normal NodeNotReady 119s node-controller Node ha-230158-m02 status is now: NodeNotReady
Name: ha-230158-m03
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ha-230158-m03
kubernetes.io/os=linux
minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
minikube.k8s.io/name=ha-230158
minikube.k8s.io/primary=false
minikube.k8s.io/updated_at=2024_08_04T00_35_56_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 04 Aug 2024 00:35:52 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ha-230158-m03
AcquireTime: <unset>
RenewTime: Sun, 04 Aug 2024 00:40:06 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 04 Aug 2024 00:36:53 +0000 Sun, 04 Aug 2024 00:35:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 04 Aug 2024 00:36:53 +0000 Sun, 04 Aug 2024 00:35:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 04 Aug 2024 00:36:53 +0000 Sun, 04 Aug 2024 00:35:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 04 Aug 2024 00:36:53 +0000 Sun, 04 Aug 2024 00:36:13 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.35
Hostname: ha-230158-m03
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: 5ae15621aea546b8af7efab921bf3880
System UUID: 5ae15621-aea5-46b8-af7e-fab921bf3880
Boot ID: bb1d0beb-bec0-4188-b329-44d539b745da
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.1.1
Kubelet Version: v1.30.3
Kube-Proxy Version: v1.30.3
PodCIDR: 10.244.2.0/24
PodCIDRs: 10.244.2.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-fc5497c4f-zdhsb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m54s
kube-system etcd-ha-230158-m03 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 4m20s
kube-system kindnet-w86v4 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 4m23s
kube-system kube-apiserver-ha-230158-m03 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m20s
kube-system kube-controller-manager-ha-230158-m03 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m19s
kube-system kube-proxy-llxx2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m23s
kube-system kube-scheduler-ha-230158-m03 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m19s
kube-system kube-vip-ha-230158-m03 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m18s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 100m (5%)
memory 150Mi (7%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m18s kube-proxy
Normal RegisteredNode 4m23s node-controller Node ha-230158-m03 event: Registered Node ha-230158-m03 in Controller
Normal NodeHasSufficientMemory 4m23s (x8 over 4m23s) kubelet Node ha-230158-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m23s (x8 over 4m23s) kubelet Node ha-230158-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m23s (x7 over 4m23s) kubelet Node ha-230158-m03 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m23s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 4m19s node-controller Node ha-230158-m03 event: Registered Node ha-230158-m03 in Controller
Normal RegisteredNode 4m4s node-controller Node ha-230158-m03 event: Registered Node ha-230158-m03 in Controller
Name: ha-230158-m04
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ha-230158-m04
kubernetes.io/os=linux
minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
minikube.k8s.io/name=ha-230158
minikube.k8s.io/primary=false
minikube.k8s.io/updated_at=2024_08_04T00_37_02_0700
minikube.k8s.io/version=v1.33.1
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 04 Aug 2024 00:37:01 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ha-230158-m04
AcquireTime: <unset>
RenewTime: Sun, 04 Aug 2024 00:40:05 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 04 Aug 2024 00:37:32 +0000 Sun, 04 Aug 2024 00:37:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 04 Aug 2024 00:37:32 +0000 Sun, 04 Aug 2024 00:37:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 04 Aug 2024 00:37:32 +0000 Sun, 04 Aug 2024 00:37:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 04 Aug 2024 00:37:32 +0000 Sun, 04 Aug 2024 00:37:24 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.165
Hostname: ha-230158-m04
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: 08c1193a347249feb43822416db43ec8
System UUID: 08c1193a-3472-49fe-b438-22416db43ec8
Boot ID: f2cdda26-4ab8-4a1c-82a1-33749eddad4c
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.1.1
Kubelet Version: v1.30.3
Kube-Proxy Version: v1.30.3
PodCIDR: 10.244.3.0/24
PodCIDRs: 10.244.3.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kindnet-6mhjl 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 3m14s
kube-system kube-proxy-b72ff 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m14s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 3m8s kube-proxy
Normal NodeHasSufficientMemory 3m14s (x2 over 3m14s) kubelet Node ha-230158-m04 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m14s (x2 over 3m14s) kubelet Node ha-230158-m04 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m14s (x2 over 3m14s) kubelet Node ha-230158-m04 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m14s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 3m13s node-controller Node ha-230158-m04 event: Registered Node ha-230158-m04 in Controller
Normal RegisteredNode 3m9s node-controller Node ha-230158-m04 event: Registered Node ha-230158-m04 in Controller
Normal RegisteredNode 3m9s node-controller Node ha-230158-m04 event: Registered Node ha-230158-m04 in Controller
Normal NodeReady 2m51s kubelet Node ha-230158-m04 status is now: NodeReady
==> dmesg <==
[ +4.583120] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +8.785040] systemd-fstab-generator[507]: Ignoring "noauto" option for root device
[ +0.060007] kauditd_printk_skb: 1 callbacks suppressed
[ +0.053342] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
[ +2.043378] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
[ +0.288136] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
[ +0.114492] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
[ +0.129550] systemd-fstab-generator[831]: Ignoring "noauto" option for root device
[ +2.372169] kauditd_printk_skb: 223 callbacks suppressed
[ +0.111879] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
[ +0.114388] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
[Aug 4 00:33] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
[ +0.148135] systemd-fstab-generator[1084]: Ignoring "noauto" option for root device
[ +3.514231] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
[ +3.917614] kauditd_printk_skb: 132 callbacks suppressed
[ +0.499073] systemd-fstab-generator[1443]: Ignoring "noauto" option for root device
[ +3.989671] systemd-fstab-generator[1628]: Ignoring "noauto" option for root device
[ +0.588004] kauditd_printk_skb: 82 callbacks suppressed
[ +6.808944] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
[ +0.086854] kauditd_printk_skb: 53 callbacks suppressed
[ +15.709067] kauditd_printk_skb: 12 callbacks suppressed
[ +15.629676] kauditd_printk_skb: 38 callbacks suppressed
[Aug 4 00:34] kauditd_printk_skb: 26 callbacks suppressed
==> etcd [88a839c50b4a] <==
{"level":"warn","ts":"2024-08-04T00:39:47.819125Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5875b509f8714909","rtt":"10.471321ms","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:39:50.332294Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:39:50.332364Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:39:52.819873Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5875b509f8714909","rtt":"10.471321ms","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:39:52.819991Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5875b509f8714909","rtt":"923.25µs","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:39:54.333882Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:39:54.334019Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:39:57.820335Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5875b509f8714909","rtt":"923.25µs","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:39:57.82035Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5875b509f8714909","rtt":"10.471321ms","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:39:58.336814Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:39:58.336872Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:02.339032Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:02.33917Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:02.820879Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5875b509f8714909","rtt":"923.25µs","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:02.821072Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5875b509f8714909","rtt":"10.471321ms","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:06.340975Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:06.341042Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:07.821517Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5875b509f8714909","rtt":"10.471321ms","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:07.82152Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5875b509f8714909","rtt":"923.25µs","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:10.342847Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:10.342911Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:12.822671Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5875b509f8714909","rtt":"923.25µs","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:12.822713Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5875b509f8714909","rtt":"10.471321ms","error":"dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:14.344971Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.188:2380/version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
{"level":"warn","ts":"2024-08-04T00:40:14.345082Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5875b509f8714909","error":"Get \"https://192.168.39.188:2380/version\": dial tcp 192.168.39.188:2380: connect: connection refused"}
==> kernel <==
00:40:15 up 7 min, 0 users, load average: 0.10, 0.30, 0.18
Linux ha-230158 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kindnet [210ee81e70d8] <==
I0804 00:39:41.033720 1 main.go:322] Node ha-230158-m04 has CIDR [10.244.3.0/24]
I0804 00:39:51.040600 1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
I0804 00:39:51.040708 1 main.go:299] handling current node
I0804 00:39:51.040901 1 main.go:295] Handling node with IPs: map[192.168.39.188:{}]
I0804 00:39:51.042301 1 main.go:322] Node ha-230158-m02 has CIDR [10.244.1.0/24]
I0804 00:39:51.043369 1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
I0804 00:39:51.043537 1 main.go:322] Node ha-230158-m03 has CIDR [10.244.2.0/24]
I0804 00:39:51.043799 1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
I0804 00:39:51.043948 1 main.go:322] Node ha-230158-m04 has CIDR [10.244.3.0/24]
I0804 00:40:01.040357 1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
I0804 00:40:01.040525 1 main.go:299] handling current node
I0804 00:40:01.040717 1 main.go:295] Handling node with IPs: map[192.168.39.188:{}]
I0804 00:40:01.040857 1 main.go:322] Node ha-230158-m02 has CIDR [10.244.1.0/24]
I0804 00:40:01.041402 1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
I0804 00:40:01.041491 1 main.go:322] Node ha-230158-m03 has CIDR [10.244.2.0/24]
I0804 00:40:01.041820 1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
I0804 00:40:01.041897 1 main.go:322] Node ha-230158-m04 has CIDR [10.244.3.0/24]
I0804 00:40:11.040357 1 main.go:295] Handling node with IPs: map[192.168.39.132:{}]
I0804 00:40:11.040689 1 main.go:299] handling current node
I0804 00:40:11.040861 1 main.go:295] Handling node with IPs: map[192.168.39.188:{}]
I0804 00:40:11.040984 1 main.go:322] Node ha-230158-m02 has CIDR [10.244.1.0/24]
I0804 00:40:11.041346 1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
I0804 00:40:11.041520 1 main.go:322] Node ha-230158-m03 has CIDR [10.244.2.0/24]
I0804 00:40:11.041859 1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
I0804 00:40:11.041961 1 main.go:322] Node ha-230158-m04 has CIDR [10.244.3.0/24]
==> kube-apiserver [91915a79609a] <==
W0804 00:33:19.303852 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.132]
I0804 00:33:19.304903 1 controller.go:615] quota admission added evaluator for: endpoints
I0804 00:33:19.309111 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0804 00:33:19.636661 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0804 00:33:20.409356 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0804 00:33:20.425521 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0804 00:33:20.605107 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0804 00:33:33.657031 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
I0804 00:33:33.795963 1 controller.go:615] quota admission added evaluator for: replicasets.apps
E0804 00:36:24.892149 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44516: use of closed network connection
E0804 00:36:25.084585 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44540: use of closed network connection
E0804 00:36:25.285993 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44556: use of closed network connection
E0804 00:36:25.488542 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44566: use of closed network connection
E0804 00:36:25.679566 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44574: use of closed network connection
E0804 00:36:25.875505 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44584: use of closed network connection
E0804 00:36:26.062795 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44608: use of closed network connection
E0804 00:36:26.247872 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44616: use of closed network connection
E0804 00:36:26.428236 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44648: use of closed network connection
E0804 00:36:26.718145 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44672: use of closed network connection
E0804 00:36:26.897778 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44688: use of closed network connection
E0804 00:36:27.074024 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44712: use of closed network connection
E0804 00:36:27.253156 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44734: use of closed network connection
E0804 00:36:27.435418 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44748: use of closed network connection
E0804 00:36:27.627361 1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44774: use of closed network connection
W0804 00:37:59.317631 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.132 192.168.39.35]
==> kube-controller-manager [0493928ca9b8] <==
I0804 00:35:52.255897 1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-230158-m03\" does not exist"
I0804 00:35:52.270306 1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-230158-m03" podCIDRs=["10.244.2.0/24"]
I0804 00:35:52.930338 1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-230158-m03"
I0804 00:36:21.334116 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="141.738104ms"
I0804 00:36:21.475587 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="141.412616ms"
I0804 00:36:21.658949 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="183.297753ms"
I0804 00:36:21.754512 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.003932ms"
I0804 00:36:21.783367 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.741619ms"
I0804 00:36:21.785936 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="213.267µs"
I0804 00:36:22.186175 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.336µs"
I0804 00:36:22.390963 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.47µs"
I0804 00:36:23.973461 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.54516ms"
I0804 00:36:23.976453 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.983µs"
I0804 00:36:24.303316 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.154086ms"
I0804 00:36:24.303449 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.971µs"
I0804 00:36:24.382687 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.219908ms"
I0804 00:36:24.383084 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="326.263µs"
E0804 00:37:01.802121 1 certificate_controller.go:146] Sync csr-j7r77 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-j7r77": the object has been modified; please apply your changes to the latest version and try again
I0804 00:37:01.906129 1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-230158-m04\" does not exist"
I0804 00:37:01.938883 1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-230158-m04" podCIDRs=["10.244.3.0/24"]
I0804 00:37:02.944138 1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-230158-m04"
I0804 00:37:24.043089 1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-230158-m04"
I0804 00:38:16.317984 1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-230158-m04"
I0804 00:38:16.484769 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.211974ms"
I0804 00:38:16.486491 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.623755ms"
==> kube-proxy [7cbd24fa0e03] <==
I0804 00:33:36.359628 1 server_linux.go:69] "Using iptables proxy"
I0804 00:33:36.382157 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.132"]
I0804 00:33:36.420473 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0804 00:33:36.420529 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0804 00:33:36.420546 1 server_linux.go:165] "Using iptables Proxier"
I0804 00:33:36.423911 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0804 00:33:36.424454 1 server.go:872] "Version info" version="v1.30.3"
I0804 00:33:36.424486 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0804 00:33:36.426533 1 config.go:192] "Starting service config controller"
I0804 00:33:36.426577 1 shared_informer.go:313] Waiting for caches to sync for service config
I0804 00:33:36.426790 1 config.go:101] "Starting endpoint slice config controller"
I0804 00:33:36.426816 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0804 00:33:36.427838 1 config.go:319] "Starting node config controller"
I0804 00:33:36.427873 1 shared_informer.go:313] Waiting for caches to sync for node config
I0804 00:33:36.527744 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0804 00:33:36.527987 1 shared_informer.go:320] Caches are synced for node config
I0804 00:33:36.528032 1 shared_informer.go:320] Caches are synced for service config
==> kube-scheduler [f1d34bc5f715] <==
E0804 00:33:18.876335 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0804 00:33:18.887399 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0804 00:33:18.887446 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0804 00:33:18.961222 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0804 00:33:18.961583 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0804 00:33:22.154237 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0804 00:36:21.238944 1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="cecb795b-aea8-4fed-acac-e99420ca5cf5" pod="default/busybox-fc5497c4f-v69qb" assumedNode="ha-230158-m02" currentNode="ha-230158-m03"
E0804 00:36:21.268890 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v69qb\": pod busybox-fc5497c4f-v69qb is already assigned to node \"ha-230158-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-v69qb" node="ha-230158-m03"
E0804 00:36:21.269285 1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cecb795b-aea8-4fed-acac-e99420ca5cf5(default/busybox-fc5497c4f-v69qb) was assumed on ha-230158-m03 but assigned to ha-230158-m02" pod="default/busybox-fc5497c4f-v69qb"
E0804 00:36:21.269410 1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v69qb\": pod busybox-fc5497c4f-v69qb is already assigned to node \"ha-230158-m02\"" pod="default/busybox-fc5497c4f-v69qb"
I0804 00:36:21.269622 1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-v69qb" node="ha-230158-m02"
E0804 00:36:21.339592 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zkdbc\": pod busybox-fc5497c4f-zkdbc is already assigned to node \"ha-230158\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-zkdbc" node="ha-230158"
E0804 00:36:21.339732 1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zkdbc\": pod busybox-fc5497c4f-zkdbc is already assigned to node \"ha-230158\"" pod="default/busybox-fc5497c4f-zkdbc"
E0804 00:37:01.982089 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-b72ff\": pod kube-proxy-b72ff is already assigned to node \"ha-230158-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-b72ff" node="ha-230158-m04"
E0804 00:37:01.982261 1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 17bd64b5-f602-4fdd-aa52-bd291dd235af(kube-system/kube-proxy-b72ff) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-b72ff"
E0804 00:37:01.982285 1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-b72ff\": pod kube-proxy-b72ff is already assigned to node \"ha-230158-m04\"" pod="kube-system/kube-proxy-b72ff"
I0804 00:37:01.982510 1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-b72ff" node="ha-230158-m04"
E0804 00:37:01.983251 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6mhjl\": pod kindnet-6mhjl is already assigned to node \"ha-230158-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-6mhjl" node="ha-230158-m04"
E0804 00:37:01.983297 1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dd391304-a440-45ab-9a55-92422404c4ec(kube-system/kindnet-6mhjl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-6mhjl"
E0804 00:37:01.983312 1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6mhjl\": pod kindnet-6mhjl is already assigned to node \"ha-230158-m04\"" pod="kube-system/kindnet-6mhjl"
I0804 00:37:01.983325 1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6mhjl" node="ha-230158-m04"
E0804 00:37:02.011560 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nvzl4\": pod kube-proxy-nvzl4 is already assigned to node \"ha-230158-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nvzl4" node="ha-230158-m04"
E0804 00:37:02.011644 1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d59b9e00-f5ee-45a6-ad39-ae31e276f650(kube-system/kube-proxy-nvzl4) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nvzl4"
E0804 00:37:02.011947 1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nvzl4\": pod kube-proxy-nvzl4 is already assigned to node \"ha-230158-m04\"" pod="kube-system/kube-proxy-nvzl4"
I0804 00:37:02.012283 1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nvzl4" node="ha-230158-m04"
==> kubelet <==
Aug 04 00:35:20 ha-230158 kubelet[2131]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 04 00:35:20 ha-230158 kubelet[2131]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 04 00:35:20 ha-230158 kubelet[2131]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 04 00:36:20 ha-230158 kubelet[2131]: E0804 00:36:20.555920 2131 iptables.go:577] "Could not set up iptables canary" err=<
Aug 04 00:36:20 ha-230158 kubelet[2131]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 04 00:36:20 ha-230158 kubelet[2131]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 04 00:36:20 ha-230158 kubelet[2131]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 04 00:36:20 ha-230158 kubelet[2131]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 04 00:36:21 ha-230158 kubelet[2131]: I0804 00:36:21.341473 2131 topology_manager.go:215] "Topology Admit Handler" podUID="b9e7a29f-edd4-4541-8e8e-05d5d0c41d28" podNamespace="default" podName="busybox-fc5497c4f-zkdbc"
Aug 04 00:36:21 ha-230158 kubelet[2131]: I0804 00:36:21.449454 2131 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csj9x\" (UniqueName: \"kubernetes.io/projected/b9e7a29f-edd4-4541-8e8e-05d5d0c41d28-kube-api-access-csj9x\") pod \"busybox-fc5497c4f-zkdbc\" (UID: \"b9e7a29f-edd4-4541-8e8e-05d5d0c41d28\") " pod="default/busybox-fc5497c4f-zkdbc"
Aug 04 00:37:20 ha-230158 kubelet[2131]: E0804 00:37:20.563718 2131 iptables.go:577] "Could not set up iptables canary" err=<
Aug 04 00:37:20 ha-230158 kubelet[2131]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 04 00:37:20 ha-230158 kubelet[2131]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 04 00:37:20 ha-230158 kubelet[2131]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 04 00:37:20 ha-230158 kubelet[2131]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 04 00:38:20 ha-230158 kubelet[2131]: E0804 00:38:20.550944 2131 iptables.go:577] "Could not set up iptables canary" err=<
Aug 04 00:38:20 ha-230158 kubelet[2131]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 04 00:38:20 ha-230158 kubelet[2131]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 04 00:38:20 ha-230158 kubelet[2131]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 04 00:38:20 ha-230158 kubelet[2131]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 04 00:39:20 ha-230158 kubelet[2131]: E0804 00:39:20.558349 2131 iptables.go:577] "Could not set up iptables canary" err=<
Aug 04 00:39:20 ha-230158 kubelet[2131]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 04 00:39:20 ha-230158 kubelet[2131]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 04 00:39:20 ha-230158 kubelet[2131]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 04 00:39:20 ha-230158 kubelet[2131]: > table="nat" chain="KUBE-KUBELET-CANARY"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-230158 -n ha-230158
helpers_test.go:261: (dbg) Run: kubectl --context ha-230158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (138.13s)