Test Report: Hyperkit_macOS 18757

76fd79497ca7607997860d279d48d970ddc3ee52:2024-04-25:34200

Failed tests (10/227)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (227.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
E0425 11:58:34.107245    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (2m32.543227451s)
ha_test.go:413: expected profile "ha-703000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-703000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-703000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-703000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.9\",\"Port\":0,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-703000 -n ha-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-703000 -n ha-703000: exit status 3 (1m15.097753397s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0425 12:02:16.351127    4243 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	E0425 12:02:16.351150    4243 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-703000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (227.64s)
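
The assertion at ha_test.go:413 reads the Status field of the matching entry in the profile list JSON shown above. To rerun just that check by hand, a sketch (assumes jq is available on the host):

	# Print the status the test asserts on; "Degraded" was expected, this run returned "Stopped"
	out/minikube-darwin-amd64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-703000") | .Status'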

TestMultiControlPlane/serial/DegradedAfterClusterRestart (227.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
E0425 12:08:34.103123    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (2m32.422573929s)
ha_test.go:413: expected profile "ha-703000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-703000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-703000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-703000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.9\",\"Port\":0,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-703000 -n ha-703000
E0425 12:11:37.167828    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-703000 -n ha-703000: exit status 3 (1m15.098595638s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0425 12:12:12.108217    4544 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	E0425 12:12:12.108231    4544 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-703000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (227.52s)
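
Both Degraded checks fail the same way: every SSH dial to the primary control-plane node (192.169.0.6:22) times out, so the profile is reported as "Stopped" rather than "Degraded". A minimal reachability probe from the host, using the machine key and user that the AddSecondaryNode failure below logs (a sketch, not part of the suite):

	# 5-second connect probe against the guest's SSH port
	nc -vz -w 5 192.169.0.6 22
	# If the port answers, try the same credentials the tests use
	ssh -i /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/ha-703000/id_rsa docker@192.169.0.6 uptime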

TestMultiControlPlane/serial/AddSecondaryNode (165.74s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-703000 --control-plane -v=7 --alsologtostderr
E0425 12:12:26.097571    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 12:13:34.106801    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p ha-703000 --control-plane -v=7 --alsologtostderr: signal: killed (1m30.63813566s)

** stderr ** 
	I0425 12:12:12.170362    4592 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:12:12.171113    4592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:12:12.171122    4592 out.go:304] Setting ErrFile to fd 2...
	I0425 12:12:12.171127    4592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:12:12.171652    4592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:12:12.172026    4592 mustload.go:65] Loading cluster: ha-703000
	I0425 12:12:12.172316    4592 config.go:182] Loaded profile config "ha-703000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:12:12.172653    4592 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:12:12.172710    4592 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:12:12.180970    4592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51930
	I0425 12:12:12.181356    4592 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:12:12.181740    4592 main.go:141] libmachine: Using API Version  1
	I0425 12:12:12.181748    4592 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:12:12.181972    4592 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:12:12.182078    4592 main.go:141] libmachine: (ha-703000) Calling .GetState
	I0425 12:12:12.182159    4592 main.go:141] libmachine: (ha-703000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:12:12.182243    4592 main.go:141] libmachine: (ha-703000) DBG | hyperkit pid from json: 4363
	I0425 12:12:12.183193    4592 host.go:66] Checking if "ha-703000" exists ...
	I0425 12:12:12.183442    4592 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:12:12.183468    4592 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:12:12.191931    4592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51932
	I0425 12:12:12.192243    4592 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:12:12.192570    4592 main.go:141] libmachine: Using API Version  1
	I0425 12:12:12.192580    4592 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:12:12.192792    4592 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:12:12.192900    4592 main.go:141] libmachine: (ha-703000) Calling .DriverName
	I0425 12:12:12.193248    4592 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:12:12.193268    4592 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:12:12.201519    4592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51934
	I0425 12:12:12.201848    4592 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:12:12.202183    4592 main.go:141] libmachine: Using API Version  1
	I0425 12:12:12.202191    4592 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:12:12.202406    4592 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:12:12.202530    4592 main.go:141] libmachine: (ha-703000-m02) Calling .GetState
	I0425 12:12:12.202612    4592 main.go:141] libmachine: (ha-703000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:12:12.202693    4592 main.go:141] libmachine: (ha-703000-m02) DBG | hyperkit pid from json: 4376
	I0425 12:12:12.203653    4592 host.go:66] Checking if "ha-703000-m02" exists ...
	I0425 12:12:12.203912    4592 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:12:12.203949    4592 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:12:12.212350    4592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51936
	I0425 12:12:12.212715    4592 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:12:12.213081    4592 main.go:141] libmachine: Using API Version  1
	I0425 12:12:12.213103    4592 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:12:12.213309    4592 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:12:12.213418    4592 main.go:141] libmachine: (ha-703000-m02) Calling .DriverName
	I0425 12:12:12.213524    4592 api_server.go:166] Checking apiserver status ...
	I0425 12:12:12.213578    4592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:12:12.213614    4592 main.go:141] libmachine: (ha-703000) Calling .GetSSHHostname
	I0425 12:12:12.213723    4592 main.go:141] libmachine: (ha-703000) Calling .GetSSHPort
	I0425 12:12:12.213820    4592 main.go:141] libmachine: (ha-703000) Calling .GetSSHKeyPath
	I0425 12:12:12.213911    4592 main.go:141] libmachine: (ha-703000) Calling .GetSSHUsername
	I0425 12:12:12.213997    4592 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/ha-703000/id_rsa Username:docker}
	W0425 12:13:27.216418    4592 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.6:22: connect: operation timed out
	W0425 12:13:27.216516    4592 api_server.go:170] stopped: unable to get apiserver pid: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	W0425 12:13:27.216733    4592 out.go:239] ! The control-plane node ha-703000 apiserver is not running (will try others): (state=Stopped)
	! The control-plane node ha-703000 apiserver is not running (will try others): (state=Stopped)
	I0425 12:13:27.216744    4592 api_server.go:166] Checking apiserver status ...
	I0425 12:13:27.216802    4592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:13:27.216819    4592 main.go:141] libmachine: (ha-703000-m02) Calling .GetSSHHostname
	I0425 12:13:27.216982    4592 main.go:141] libmachine: (ha-703000-m02) Calling .GetSSHPort
	I0425 12:13:27.217120    4592 main.go:141] libmachine: (ha-703000-m02) Calling .GetSSHKeyPath
	I0425 12:13:27.217232    4592 main.go:141] libmachine: (ha-703000-m02) Calling .GetSSHUsername
	I0425 12:13:27.217329    4592 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/ha-703000-m02/id_rsa Username:docker}

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 node add -p ha-703000 --control-plane -v=7 --alsologtostderr" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-703000 -n ha-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-703000 -n ha-703000: exit status 3 (1m15.099748817s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0425 12:14:57.848616    4669 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	E0425 12:14:57.848633    4669 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-703000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (165.74s)
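
The node add run was killed after roughly 90 seconds while still probing apiservers over SSH. The driver logged the hyperkit pid it read from the machine JSON (4363 for ha-703000), so a quick host-side check can distinguish a dead VM from a dead network path (a sketch, using the pid and paths from the log above):

	# Is the hyperkit process backing ha-703000 still alive?
	ps -p 4363 -o pid,etime,command
	# Machine state files for the profile (id_rsa, tty, console-ring, hyperkit.pid)
	ls -l /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/ha-703000/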

TestImageBuild/serial/Setup (76.89s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-730000 --driver=hyperkit 
E0425 12:17:26.101602    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p image-730000 --driver=hyperkit : exit status 90 (1m16.728782927s)

-- stdout --
	* [image-730000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "image-730000" primary control-plane node in "image-730000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 25 19:16:44 image-730000 systemd[1]: Starting Docker Application Container Engine...
	Apr 25 19:16:44 image-730000 dockerd[516]: time="2024-04-25T19:16:44.814224463Z" level=info msg="Starting up"
	Apr 25 19:16:44 image-730000 dockerd[516]: time="2024-04-25T19:16:44.814706396Z" level=info msg="containerd not running, starting managed containerd"
	Apr 25 19:16:44 image-730000 dockerd[516]: time="2024-04-25T19:16:44.815183106Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=522
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.831780340Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.845819133Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.845879007Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.845941486Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.845976593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.846067493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.846109449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.846252605Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.846294053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.846333152Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.846363423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.846443938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.846647665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.848757365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.848812975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.848948148Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.848991026Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.849078422Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.849144621Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.849178980Z" level=info msg="metadata content store policy set" policy=shared
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.892912587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.893010844Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.893059783Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.893152584Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.893197411Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.893296262Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.896990842Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897200426Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897300153Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897349046Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897363635Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897373770Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897382562Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897419261Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897472376Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897547140Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897558409Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897566386Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897580279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897589680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897597921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897607168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897615751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897683675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897693894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897702861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897711623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897724074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897732075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897739598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897747552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897794628Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897833388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897844923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897853279Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897933526Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897968625Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897978396Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.897985569Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.898072867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.898104101Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.898113483Z" level=info msg="NRI interface is disabled by configuration."
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.898323811Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.898383624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.898550832Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 25 19:16:44 image-730000 dockerd[522]: time="2024-04-25T19:16:44.898586051Z" level=info msg="containerd successfully booted in 0.068669s"
	Apr 25 19:16:45 image-730000 dockerd[516]: time="2024-04-25T19:16:45.849617701Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 25 19:16:45 image-730000 dockerd[516]: time="2024-04-25T19:16:45.854136446Z" level=info msg="Loading containers: start."
	Apr 25 19:16:45 image-730000 dockerd[516]: time="2024-04-25T19:16:45.975003363Z" level=info msg="Loading containers: done."
	Apr 25 19:16:45 image-730000 dockerd[516]: time="2024-04-25T19:16:45.985481574Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 25 19:16:45 image-730000 dockerd[516]: time="2024-04-25T19:16:45.985606947Z" level=info msg="Daemon has completed initialization"
	Apr 25 19:16:46 image-730000 dockerd[516]: time="2024-04-25T19:16:46.015808485Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 25 19:16:46 image-730000 systemd[1]: Started Docker Application Container Engine.
	Apr 25 19:16:46 image-730000 dockerd[516]: time="2024-04-25T19:16:46.015919883Z" level=info msg="API listen on [::]:2376"
	Apr 25 19:16:47 image-730000 dockerd[516]: time="2024-04-25T19:16:47.028375397Z" level=info msg="Processing signal 'terminated'"
	Apr 25 19:16:47 image-730000 dockerd[516]: time="2024-04-25T19:16:47.029295747Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 25 19:16:47 image-730000 systemd[1]: Stopping Docker Application Container Engine...
	Apr 25 19:16:47 image-730000 dockerd[516]: time="2024-04-25T19:16:47.029583428Z" level=info msg="Daemon shutdown complete"
	Apr 25 19:16:47 image-730000 dockerd[516]: time="2024-04-25T19:16:47.029624907Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 25 19:16:47 image-730000 dockerd[516]: time="2024-04-25T19:16:47.029742551Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 25 19:16:48 image-730000 systemd[1]: docker.service: Deactivated successfully.
	Apr 25 19:16:48 image-730000 systemd[1]: Stopped Docker Application Container Engine.
	Apr 25 19:16:48 image-730000 systemd[1]: Starting Docker Application Container Engine...
	Apr 25 19:16:48 image-730000 dockerd[870]: time="2024-04-25T19:16:48.089301466Z" level=info msg="Starting up"
	Apr 25 19:17:48 image-730000 dockerd[870]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 25 19:17:48 image-730000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 25 19:17:48 image-730000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 25 19:17:48 image-730000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-amd64 start -p image-730000 --driver=hyperkit " : exit status 90
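
The journal above shows the decisive detail: the first dockerd (pid 516) comes up with its own managed containerd, but after minikube restarts docker.service the new dockerd (pid 870) waits on /run/containerd/containerd.sock and gives up after 60 seconds. If the VM were still reachable, the next step would be to inspect the containerd unit directly (a sketch; assumes containerd runs as a systemd unit in the guest):

	out/minikube-darwin-amd64 ssh -p image-730000 "sudo systemctl status containerd"
	out/minikube-darwin-amd64 ssh -p image-730000 "sudo journalctl -u containerd --no-pager | tail -n 50"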
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p image-730000 -n image-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p image-730000 -n image-730000: exit status 6 (160.815967ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0425 12:17:48.212244    4845 status.go:417] kubeconfig endpoint: get endpoint: "image-730000" does not appear in /Users/jenkins/minikube-integration/18757-1425/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "image-730000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestImageBuild/serial/Setup (76.89s)
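
The exit status 6 in the post-mortem is a secondary symptom: the failed start never registered "image-730000" in the kubeconfig, so status flags the kubectl context as stale. The warning's own suggested fix would be (sketch):

	out/minikube-darwin-amd64 update-context -p image-730000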

TestMultiNode/serial/StartAfterStop (127.3s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 node start m03 -v=7 --alsologtostderr: exit status 90 (1m17.637445758s)

-- stdout --
	* Starting "multinode-034000-m03" worker node in "multinode-034000" cluster
	* Restarting existing hyperkit VM for "multinode-034000-m03" ...
	
	

-- /stdout --
** stderr ** 
	I0425 12:25:56.920458    5605 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:25:56.920822    5605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:25:56.920828    5605 out.go:304] Setting ErrFile to fd 2...
	I0425 12:25:56.920831    5605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:25:56.921018    5605 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:25:56.921377    5605 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:25:56.921681    5605 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:25:56.922045    5605 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:25:56.922085    5605 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:25:56.930424    5605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52804
	I0425 12:25:56.930867    5605 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:25:56.931305    5605 main.go:141] libmachine: Using API Version  1
	I0425 12:25:56.931337    5605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:25:56.931603    5605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:25:56.931722    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:25:56.931819    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:25:56.931883    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5383
	I0425 12:25:56.933041    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid 5383 missing from process table
	W0425 12:25:56.933095    5605 host.go:58] "multinode-034000-m03" host status: Stopped
	I0425 12:25:56.954924    5605 out.go:177] * Starting "multinode-034000-m03" worker node in "multinode-034000" cluster
	I0425 12:25:56.975543    5605 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:25:56.975592    5605 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 12:25:56.975609    5605 cache.go:56] Caching tarball of preloaded images
	I0425 12:25:56.975781    5605 preload.go:173] Found /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 12:25:56.975797    5605 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 12:25:56.975927    5605 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:25:56.976470    5605 start.go:360] acquireMachinesLock for multinode-034000-m03: {Name:mk3030f9170bc25c9124548f80d3e90a8c4abff5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 12:25:56.976552    5605 start.go:364] duration metric: took 55.55µs to acquireMachinesLock for "multinode-034000-m03"
	I0425 12:25:56.976571    5605 start.go:96] Skipping create...Using existing machine configuration
	I0425 12:25:56.976583    5605 fix.go:54] fixHost starting: m03
	I0425 12:25:56.976879    5605 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:25:56.976899    5605 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:25:56.985353    5605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52806
	I0425 12:25:56.985703    5605 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:25:56.986037    5605 main.go:141] libmachine: Using API Version  1
	I0425 12:25:56.986048    5605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:25:56.986250    5605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:25:56.986362    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:25:56.986453    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:25:56.986529    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:25:56.986601    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5383
	I0425 12:25:56.987834    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid 5383 missing from process table
	I0425 12:25:56.987888    5605 fix.go:112] recreateIfNeeded on multinode-034000-m03: state=Stopped err=<nil>
	I0425 12:25:56.987911    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	W0425 12:25:56.987996    5605 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 12:25:57.009418    5605 out.go:177] * Restarting existing hyperkit VM for "multinode-034000-m03" ...
	I0425 12:25:57.030351    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .Start
	I0425 12:25:57.030529    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:25:57.030556    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/hyperkit.pid
	I0425 12:25:57.030599    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Using UUID c849e54d-01ec-4b42-86e6-91828949bf04
	I0425 12:25:57.059087    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Generated MAC aa:be:2a:d5:f9:e
	I0425 12:25:57.059119    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000
	I0425 12:25:57.059316    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c849e54d-01ec-4b42-86e6-91828949bf04", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002fefc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0425 12:25:57.059402    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c849e54d-01ec-4b42-86e6-91828949bf04", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002fefc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0425 12:25:57.059485    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "c849e54d-01ec-4b42-86e6-91828949bf04", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/multinode-034000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/bzimage,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"}
	I0425 12:25:57.059548    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U c849e54d-01ec-4b42-86e6-91828949bf04 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/multinode-034000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/bzimage,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"
	I0425 12:25:57.059592    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0425 12:25:57.061161    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 DEBUG: hyperkit: Pid is 5609
	I0425 12:25:57.061728    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Attempt 0
	I0425 12:25:57.061759    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:25:57.061836    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:25:57.064633    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Searching for aa:be:2a:d5:f9:e in /var/db/dhcpd_leases ...
	I0425 12:25:57.064698    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0425 12:25:57.064714    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:aa:be:2a:d5:f9:e ID:1,aa:be:2a:d5:f9:e Lease:0x662aae43}
	I0425 12:25:57.064730    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Found match: aa:be:2a:d5:f9:e
	I0425 12:25:57.064752    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | IP: 192.169.0.18
	I0425 12:25:57.064874    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetConfigRaw
	I0425 12:25:57.065530    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:25:57.065796    5605 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:25:57.066577    5605 machine.go:94] provisionDockerMachine start ...
	I0425 12:25:57.066598    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:25:57.066796    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:25:57.066935    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:25:57.067073    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:25:57.067186    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:25:57.067281    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:25:57.067479    5605 main.go:141] libmachine: Using SSH client type: native
	I0425 12:25:57.067694    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:25:57.067704    5605 main.go:141] libmachine: About to run SSH command:
	hostname
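Every provisioning step here runs a one-shot command over SSH with the machine's generated key, as the .GetSSH* calls and the native-client struct dump above indicate. A minimal sketch of that flow using golang.org/x/crypto/ssh rather than minikube's own wrapper (runSSH is a hypothetical helper; host, user, and key path are taken from the log):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH loads a private key, dials host:22 as the given user, and runs
// one command, returning its stdout.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.Output(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.169.0.18:22", "docker",
		"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa",
		"hostname")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(out)
}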
	I0425 12:25:57.070233    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0425 12:25:57.078909    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0425 12:25:57.079908    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:25:57.079933    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:25:57.079947    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:25:57.079969    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:25:57.464116    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0425 12:25:57.464130    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0425 12:25:57.578995    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:25:57.579015    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:25:57.579024    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:25:57.579033    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:25:57.579854    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0425 12:25:57.579864    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0425 12:26:02.895465    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:26:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0425 12:26:02.895481    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:26:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0425 12:26:02.895491    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:26:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0425 12:26:02.919023    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:26:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0425 12:26:10.241764    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 12:26:10.241793    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetMachineName
	I0425 12:26:10.241942    5605 buildroot.go:166] provisioning hostname "multinode-034000-m03"
	I0425 12:26:10.241953    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetMachineName
	I0425 12:26:10.242046    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:26:10.242131    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:26:10.242223    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:10.242314    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:10.242405    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:26:10.242522    5605 main.go:141] libmachine: Using SSH client type: native
	I0425 12:26:10.242673    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:26:10.242682    5605 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-034000-m03 && echo "multinode-034000-m03" | sudo tee /etc/hostname
	I0425 12:26:10.316663    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-034000-m03
	
	I0425 12:26:10.316682    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:26:10.316818    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:26:10.316914    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:10.317004    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:10.317118    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:26:10.317239    5605 main.go:141] libmachine: Using SSH client type: native
	I0425 12:26:10.317379    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:26:10.317392    5605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-034000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-034000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-034000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 12:26:10.388211    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
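The shell block above is an idempotent /etc/hosts fix-up: do nothing if the hostname is already present, rewrite an existing 127.0.1.1 entry if there is one, otherwise append a new one. The same policy as a Go sketch (ensureHostsEntry is a hypothetical helper mirroring the logged shell, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry guarantees that /etc/hosts maps 127.0.1.1 to name,
// without duplicating an entry that is already there.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	hasName := regexp.MustCompile(`\s` + regexp.QuoteMeta(name) + `$`)
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)

	for _, l := range lines {
		if hasName.MatchString(l) {
			return nil // already present, nothing to do
		}
	}
	for i, l := range lines {
		if loopback.MatchString(l) {
			lines[i] = "127.0.1.1 " + name // rewrite the existing entry
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+name) // no entry at all: append
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "multinode-034000-m03"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}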
	I0425 12:26:10.388231    5605 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18757-1425/.minikube CaCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18757-1425/.minikube}
	I0425 12:26:10.388245    5605 buildroot.go:174] setting up certificates
	I0425 12:26:10.388253    5605 provision.go:84] configureAuth start
	I0425 12:26:10.388260    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetMachineName
	I0425 12:26:10.388394    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:26:10.388487    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:26:10.388566    5605 provision.go:143] copyHostCerts
	I0425 12:26:10.388600    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:26:10.388669    5605 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem, removing ...
	I0425 12:26:10.388678    5605 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:26:10.388829    5605 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem (1123 bytes)
	I0425 12:26:10.389063    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:26:10.389106    5605 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem, removing ...
	I0425 12:26:10.389113    5605 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:26:10.389212    5605 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem (1675 bytes)
	I0425 12:26:10.389353    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:26:10.389398    5605 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem, removing ...
	I0425 12:26:10.389403    5605 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:26:10.389489    5605 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem (1078 bytes)
	I0425 12:26:10.389632    5605 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem org=jenkins.multinode-034000-m03 san=[127.0.0.1 192.169.0.18 localhost minikube multinode-034000-m03]
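The server certificate generated here must carry the loopback address, the machine IP, and the host aliases as SANs, exactly the san=[...] list logged above. A minimal Go sketch of issuing such a certificate; it self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair named in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-034000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the logged list: 127.0.0.1 192.169.0.18 localhost minikube multinode-034000-m03
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.18")},
		DNSNames:    []string{"localhost", "minikube", "multinode-034000-m03"},
	}
	// Self-signed for brevity; the provisioner signs with the CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}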
	I0425 12:26:10.569011    5605 provision.go:177] copyRemoteCerts
	I0425 12:26:10.569077    5605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 12:26:10.569096    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:26:10.569236    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:26:10.569330    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:10.569422    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:26:10.569515    5605 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:26:10.607354    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 12:26:10.607430    5605 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0425 12:26:10.626767    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 12:26:10.626831    5605 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 12:26:10.646498    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 12:26:10.646563    5605 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0425 12:26:10.666096    5605 provision.go:87] duration metric: took 277.822029ms to configureAuth
	I0425 12:26:10.666109    5605 buildroot.go:189] setting minikube options for container-runtime
	I0425 12:26:10.666281    5605 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:26:10.666300    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:26:10.666430    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:26:10.666539    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:26:10.666623    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:10.666713    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:10.666802    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:26:10.666914    5605 main.go:141] libmachine: Using SSH client type: native
	I0425 12:26:10.667046    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:26:10.667054    5605 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0425 12:26:10.731990    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0425 12:26:10.732003    5605 buildroot.go:70] root file system type: tmpfs
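The `df --output=fstype /` probe tells the provisioner the guest is running from a tmpfs live image. The same answer can be read from /proc/mounts on the guest; a small sketch for a Linux guest (rootFSType is a hypothetical helper):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// rootFSType reports the filesystem type of "/" by scanning /proc/mounts,
// the same information `df --output=fstype /` prints.
func rootFSType() (string, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		// /proc/mounts columns: device mountpoint fstype options dump pass
		if len(fields) >= 3 && fields[1] == "/" {
			return fields[2], nil
		}
	}
	return "", fmt.Errorf("no mount entry for /")
}

func main() {
	t, err := rootFSType()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(t) // "tmpfs" on a boot2docker-style live image, as logged above
}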
	I0425 12:26:10.732079    5605 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0425 12:26:10.732112    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:26:10.732246    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:26:10.732340    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:10.732424    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:10.732504    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:26:10.732619    5605 main.go:141] libmachine: Using SSH client type: native
	I0425 12:26:10.732751    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:26:10.732796    5605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0425 12:26:10.809770    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0425 12:26:10.809796    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:26:10.809953    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:26:10.810055    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:10.810139    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:10.810228    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:26:10.810355    5605 main.go:141] libmachine: Using SSH client type: native
	I0425 12:26:10.810504    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:26:10.810516    5605 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0425 12:26:12.371824    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0425 12:26:12.371841    5605 machine.go:97] duration metric: took 15.304791754s to provisionDockerMachine
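The `diff ... || { mv ...; systemctl ...; }` one-liner above installs the rendered unit only when it differs from what is live, then reloads, enables, and restarts the service; here diff failed because no docker.service existed yet, so the full branch ran. A Go sketch of the same write-if-changed pattern, run locally instead of over SSH (installUnit is a hypothetical helper):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnit swaps in newPath only when it differs from livePath, then
// daemon-reloads, enables, and restarts the service. Requires root.
func installUnit(newPath, livePath, service string) error {
	fresh, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if cur, err := os.ReadFile(livePath); err == nil && bytes.Equal(cur, fresh) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.Rename(newPath, livePath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", service},
		{"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := installUnit("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service", "docker")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}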
	I0425 12:26:12.371851    5605 start.go:293] postStartSetup for "multinode-034000-m03" (driver="hyperkit")
	I0425 12:26:12.371859    5605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 12:26:12.371868    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:26:12.372057    5605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 12:26:12.372071    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:26:12.372167    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:26:12.372262    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:12.372342    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:26:12.372415    5605 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:26:12.411023    5605 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 12:26:12.414121    5605 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 12:26:12.414134    5605 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/addons for local assets ...
	I0425 12:26:12.414273    5605 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/files for local assets ...
	I0425 12:26:12.414450    5605 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> 18852.pem in /etc/ssl/certs
	I0425 12:26:12.414462    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /etc/ssl/certs/18852.pem
	I0425 12:26:12.414681    5605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 12:26:12.421764    5605 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /etc/ssl/certs/18852.pem (1708 bytes)
	I0425 12:26:12.441656    5605 start.go:296] duration metric: took 69.793532ms for postStartSetup
	I0425 12:26:12.441680    5605 fix.go:56] duration metric: took 15.464636812s for fixHost
	I0425 12:26:12.441693    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:26:12.441822    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:26:12.441923    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:12.442006    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:12.442111    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:26:12.442227    5605 main.go:141] libmachine: Using SSH client type: native
	I0425 12:26:12.442377    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:26:12.442388    5605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 12:26:12.511764    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714073172.617616605
	
	I0425 12:26:12.511775    5605 fix.go:216] guest clock: 1714073172.617616605
	I0425 12:26:12.511781    5605 fix.go:229] Guest: 2024-04-25 12:26:12.617616605 -0700 PDT Remote: 2024-04-25 12:26:12.441684 -0700 PDT m=+15.564536883 (delta=175.932605ms)
	I0425 12:26:12.511802    5605 fix.go:200] guest clock delta is within tolerance: 175.932605ms
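The guest-clock check above parses the guest's `date +%s.%N` output and accepts the skew if it is within tolerance (175.9ms here). A sketch of the comparison; clockDelta is a hypothetical helper and the 2s threshold is an assumption, not necessarily minikube's value:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses a `date +%s.%N` stamp and returns guest minus host.
// float64 loses a little sub-microsecond precision, which is fine here.
func clockDelta(guestStamp string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestStamp, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for illustration
	d, err := clockDelta("1714073172.617616605", time.Now())
	if err != nil {
		fmt.Println(err)
		return
	}
	if d < 0 {
		d = -d
	}
	fmt.Printf("delta=%v withinTolerance=%v\n", d, d <= tolerance)
}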
	I0425 12:26:12.511807    5605 start.go:83] releasing machines lock for "multinode-034000-m03", held for 15.534781963s
	I0425 12:26:12.511827    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:26:12.511955    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:26:12.512050    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:26:12.512350    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:26:12.512448    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:26:12.512526    5605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 12:26:12.512560    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:26:12.512604    5605 ssh_runner.go:195] Run: systemctl --version
	I0425 12:26:12.512614    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:26:12.512653    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:26:12.512697    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:26:12.512731    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:12.512782    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:26:12.512804    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:26:12.512881    5605 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:26:12.512900    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:26:12.512991    5605 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:26:12.553047    5605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 12:26:12.603557    5605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 12:26:12.603647    5605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 12:26:12.616926    5605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 12:26:12.616940    5605 start.go:494] detecting cgroup driver to use...
	I0425 12:26:12.617048    5605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:26:12.631943    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0425 12:26:12.641237    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0425 12:26:12.650173    5605 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0425 12:26:12.650238    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0425 12:26:12.659312    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:26:12.668294    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0425 12:26:12.677047    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:26:12.686036    5605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 12:26:12.695079    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0425 12:26:12.704171    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0425 12:26:12.715711    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
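The sed commands above rewrite containerd's config.toml in place; the SystemdCgroup toggle is the one that actually selects the cgroupfs driver. The same edit as a Go sketch (setSystemdCgroup is a hypothetical helper mirroring the logged sed, not minikube's code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup rewrites any "SystemdCgroup = ..." line in the given
// config.toml, preserving its indentation, just like the sed invocation.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %v", enabled)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}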
	I0425 12:26:12.724771    5605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 12:26:12.732865    5605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 12:26:12.741005    5605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:26:12.843167    5605 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0425 12:26:12.862413    5605 start.go:494] detecting cgroup driver to use...
	I0425 12:26:12.862496    5605 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0425 12:26:12.879363    5605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:26:12.890124    5605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 12:26:12.913617    5605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:26:12.924863    5605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:26:12.934685    5605 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0425 12:26:12.956737    5605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:26:12.967290    5605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:26:12.982168    5605 ssh_runner.go:195] Run: which cri-dockerd
	I0425 12:26:12.985062    5605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0425 12:26:12.992162    5605 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0425 12:26:13.005653    5605 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0425 12:26:13.106648    5605 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0425 12:26:13.239670    5605 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0425 12:26:13.239753    5605 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
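The 130-byte /etc/docker/daemon.json pushed here is not shown in the log. A plausible minimal sketch that would pin dockerd to the cgroupfs driver via the standard exec-opts key; the exact contents minikube writes are an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Assumed config: "exec-opts" is dockerd's documented knob for the
	// cgroup driver; the real file may carry more keys than this.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}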
	I0425 12:26:13.253753    5605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:26:13.347526    5605 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0425 12:27:14.390509    5605 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.041128879s)
	I0425 12:27:14.390590    5605 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0425 12:27:14.425758    5605 out.go:177] 
	W0425 12:27:14.447245    5605 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 25 19:26:11 multinode-034000-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 25 19:26:11 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:11.253889099Z" level=info msg="Starting up"
	Apr 25 19:26:11 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:11.254513791Z" level=info msg="containerd not running, starting managed containerd"
	Apr 25 19:26:11 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:11.255040925Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=509
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.273744425Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.288734282Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.288813397Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.288878095Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.288914111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289122936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289174770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289315579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289360669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289392184Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289420988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289570778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289823017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291537592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291595364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291733876Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291776432Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291909120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291976491Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.292008486Z" level=info msg="metadata content store policy set" policy=shared
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293399881Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293458978Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293496160Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293531988Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293563578Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293630728Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293914853Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293999712Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294037107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294068059Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294098693Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294129466Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294159331Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294189892Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294226520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294257577Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294287263Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294316964Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294352107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294383048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294416304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294446337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294475349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294504825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294533936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294562715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294593483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294623806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294654923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294684103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294713006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294746389Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294781308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294811069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294840197Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294913300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294956744Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294986675Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295082452Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295148848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295182548Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295210008Z" level=info msg="NRI interface is disabled by configuration."
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295410075Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295497093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295583473Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295654988Z" level=info msg="containerd successfully booted in 0.022535s"
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.271264701Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.287531973Z" level=info msg="Loading containers: start."
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.410873971Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.447119268Z" level=info msg="Loading containers: done."
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.457562430Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.457726187Z" level=info msg="Daemon has completed initialization"
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.476547218Z" level=info msg="API listen on [::]:2376"
	Apr 25 19:26:12 multinode-034000-m03 systemd[1]: Started Docker Application Container Engine.
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.476664959Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.466160161Z" level=info msg="Processing signal 'terminated'"
	Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.467050820Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.467221822Z" level=info msg="Daemon shutdown complete"
	Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.467361507Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.467377635Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 25 19:26:13 multinode-034000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Apr 25 19:26:14 multinode-034000-m03 systemd[1]: docker.service: Deactivated successfully.
	Apr 25 19:26:14 multinode-034000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Apr 25 19:26:14 multinode-034000-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 25 19:26:14 multinode-034000-m03 dockerd[829]: time="2024-04-25T19:26:14.521860774Z" level=info msg="Starting up"
	Apr 25 19:27:14 multinode-034000-m03 dockerd[829]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 25 19:27:14 multinode-034000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 25 19:27:14 multinode-034000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 25 19:27:14 multinode-034000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
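Per the journal above, the failure is dockerd's second start timing out while dialing /run/containerd/containerd.sock ("context deadline exceeded"). A small diagnostic sketch that probes the same socket with a deadline, reproducing the check that failed:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same socket dockerd could not reach within its startup deadline.
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
	if err != nil {
		fmt.Println("containerd socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket is accepting connections")
}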
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 25 19:26:11 multinode-034000-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 25 19:26:11 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:11.253889099Z" level=info msg="Starting up"
	Apr 25 19:26:11 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:11.254513791Z" level=info msg="containerd not running, starting managed containerd"
	Apr 25 19:26:11 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:11.255040925Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=509
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.273744425Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.288734282Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.288813397Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.288878095Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.288914111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289122936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289174770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289315579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289360669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289392184Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289420988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289570778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289823017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291537592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291595364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291733876Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291776432Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291909120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291976491Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.292008486Z" level=info msg="metadata content store policy set" policy=shared
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293399881Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293458978Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293496160Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293531988Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293563578Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293630728Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293914853Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293999712Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294037107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294068059Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294098693Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294129466Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294159331Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294189892Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294226520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294257577Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294287263Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294316964Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294352107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294383048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294416304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294446337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294475349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294504825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294533936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294562715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294593483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294623806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294654923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294684103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294713006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294746389Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294781308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294811069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294840197Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294913300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294956744Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294986675Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295082452Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295148848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295182548Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295210008Z" level=info msg="NRI interface is disabled by configuration."
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295410075Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295497093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295583473Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295654988Z" level=info msg="containerd successfully booted in 0.022535s"
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.271264701Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.287531973Z" level=info msg="Loading containers: start."
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.410873971Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.447119268Z" level=info msg="Loading containers: done."
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.457562430Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.457726187Z" level=info msg="Daemon has completed initialization"
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.476547218Z" level=info msg="API listen on [::]:2376"
	Apr 25 19:26:12 multinode-034000-m03 systemd[1]: Started Docker Application Container Engine.
	Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.476664959Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.466160161Z" level=info msg="Processing signal 'terminated'"
	Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.467050820Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.467221822Z" level=info msg="Daemon shutdown complete"
	Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.467361507Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.467377635Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 25 19:26:13 multinode-034000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Apr 25 19:26:14 multinode-034000-m03 systemd[1]: docker.service: Deactivated successfully.
	Apr 25 19:26:14 multinode-034000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Apr 25 19:26:14 multinode-034000-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 25 19:26:14 multinode-034000-m03 dockerd[829]: time="2024-04-25T19:26:14.521860774Z" level=info msg="Starting up"
	Apr 25 19:27:14 multinode-034000-m03 dockerd[829]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 25 19:27:14 multinode-034000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 25 19:27:14 multinode-034000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 25 19:27:14 multinode-034000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
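The journal excerpt above captures the whole failure sequence: the first dockerd (pid 503) comes up cleanly at 19:26:11-12, is deliberately stopped one second later by the provisioner's restart, and the second dockerd (pid 829) then blocks for exactly 60 seconds before giving up on containerd's socket. A sketch for pulling just that timeline back out of the guest journal (assumes SSH access to multinode-034000-m03; the time window is taken from the excerpt above):

    # inside the guest; window matches the journal excerpt above
    sudo journalctl -u docker -o short-precise --no-pager \
      --since "2024-04-25 19:26:10" --until "2024-04-25 19:27:20" \
      | grep -E 'Start(ing|ed)|Stopp|Failed|failed to start daemon'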
	W0425 12:27:14.447339    5605 out.go:239] * 
	W0425 12:27:14.450127    5605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 12:27:14.472263    5605 out.go:177] 

** /stderr **
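The dial failure points at /run/containerd/containerd.sock, the system containerd socket (not the docker-managed one at /var/run/docker/containerd/). A first check from a shell on the guest, assuming the stock minikube ISO tooling (ctr defaults to that same socket):

    sudo systemctl status containerd --no-pager
    ls -l /run/containerd/containerd.sock
    sudo ctr version    # fails fast if nothing is listening on the socket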
multinode_test.go:284: I0425 12:25:56.920458    5605 out.go:291] Setting OutFile to fd 1 ...
I0425 12:25:56.920822    5605 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 12:25:56.920828    5605 out.go:304] Setting ErrFile to fd 2...
I0425 12:25:56.920831    5605 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 12:25:56.921018    5605 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
I0425 12:25:56.921377    5605 mustload.go:65] Loading cluster: multinode-034000
I0425 12:25:56.921681    5605 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 12:25:56.922045    5605 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0425 12:25:56.922085    5605 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0425 12:25:56.930424    5605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52804
I0425 12:25:56.930867    5605 main.go:141] libmachine: () Calling .GetVersion
I0425 12:25:56.931305    5605 main.go:141] libmachine: Using API Version  1
I0425 12:25:56.931337    5605 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 12:25:56.931603    5605 main.go:141] libmachine: () Calling .GetMachineName
I0425 12:25:56.931722    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
I0425 12:25:56.931819    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0425 12:25:56.931883    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5383
I0425 12:25:56.933041    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid 5383 missing from process table
W0425 12:25:56.933095    5605 host.go:58] "multinode-034000-m03" host status: Stopped
I0425 12:25:56.954924    5605 out.go:177] * Starting "multinode-034000-m03" worker node in "multinode-034000" cluster
I0425 12:25:56.975543    5605 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0425 12:25:56.975592    5605 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
I0425 12:25:56.975609    5605 cache.go:56] Caching tarball of preloaded images
I0425 12:25:56.975781    5605 preload.go:173] Found /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0425 12:25:56.975797    5605 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0425 12:25:56.975927    5605 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
I0425 12:25:56.976470    5605 start.go:360] acquireMachinesLock for multinode-034000-m03: {Name:mk3030f9170bc25c9124548f80d3e90a8c4abff5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0425 12:25:56.976552    5605 start.go:364] duration metric: took 55.55µs to acquireMachinesLock for "multinode-034000-m03"
I0425 12:25:56.976571    5605 start.go:96] Skipping create...Using existing machine configuration
I0425 12:25:56.976583    5605 fix.go:54] fixHost starting: m03
I0425 12:25:56.976879    5605 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0425 12:25:56.976899    5605 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0425 12:25:56.985353    5605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52806
I0425 12:25:56.985703    5605 main.go:141] libmachine: () Calling .GetVersion
I0425 12:25:56.986037    5605 main.go:141] libmachine: Using API Version  1
I0425 12:25:56.986048    5605 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 12:25:56.986250    5605 main.go:141] libmachine: () Calling .GetMachineName
I0425 12:25:56.986362    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
I0425 12:25:56.986453    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
I0425 12:25:56.986529    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0425 12:25:56.986601    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5383
I0425 12:25:56.987834    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid 5383 missing from process table
I0425 12:25:56.987888    5605 fix.go:112] recreateIfNeeded on multinode-034000-m03: state=Stopped err=<nil>
I0425 12:25:56.987911    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
W0425 12:25:56.987996    5605 fix.go:138] unexpected machine state, will restart: <nil>
I0425 12:25:57.009418    5605 out.go:177] * Restarting existing hyperkit VM for "multinode-034000-m03" ...
I0425 12:25:57.030351    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .Start
I0425 12:25:57.030529    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0425 12:25:57.030556    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/hyperkit.pid
I0425 12:25:57.030599    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Using UUID c849e54d-01ec-4b42-86e6-91828949bf04
I0425 12:25:57.059087    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Generated MAC aa:be:2a:d5:f9:e
I0425 12:25:57.059119    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000
I0425 12:25:57.059316    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c849e54d-01ec-4b42-86e6-91828949bf04", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002fefc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0425 12:25:57.059402    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c849e54d-01ec-4b42-86e6-91828949bf04", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002fefc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0425 12:25:57.059485    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "c849e54d-01ec-4b42-86e6-91828949bf04", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/multinode-034000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/bzimage,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"}
I0425 12:25:57.059548    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U c849e54d-01ec-4b42-86e6-91828949bf04 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/multinode-034000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/bzimage,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"
I0425 12:25:57.059592    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0425 12:25:57.061161    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 DEBUG: hyperkit: Pid is 5609
I0425 12:25:57.061728    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Attempt 0
I0425 12:25:57.061759    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0425 12:25:57.061836    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
I0425 12:25:57.064633    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Searching for aa:be:2a:d5:f9:e in /var/db/dhcpd_leases ...
I0425 12:25:57.064698    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Found 17 entries in /var/db/dhcpd_leases!
I0425 12:25:57.064714    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:aa:be:2a:d5:f9:e ID:1,aa:be:2a:d5:f9:e Lease:0x662aae43}
I0425 12:25:57.064730    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | Found match: aa:be:2a:d5:f9:e
I0425 12:25:57.064752    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | IP: 192.169.0.18
I0425 12:25:57.064874    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetConfigRaw
I0425 12:25:57.065530    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
I0425 12:25:57.065796    5605 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
I0425 12:25:57.066577    5605 machine.go:94] provisionDockerMachine start ...
I0425 12:25:57.066598    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
I0425 12:25:57.066796    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
I0425 12:25:57.066935    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
I0425 12:25:57.067073    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:25:57.067186    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:25:57.067281    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
I0425 12:25:57.067479    5605 main.go:141] libmachine: Using SSH client type: native
I0425 12:25:57.067694    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
I0425 12:25:57.067704    5605 main.go:141] libmachine: About to run SSH command:
hostname
I0425 12:25:57.070233    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0425 12:25:57.078909    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0425 12:25:57.079908    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0425 12:25:57.079933    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0425 12:25:57.079947    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0425 12:25:57.079969    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0425 12:25:57.464116    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0425 12:25:57.464130    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0425 12:25:57.578995    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0425 12:25:57.579015    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0425 12:25:57.579024    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0425 12:25:57.579033    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0425 12:25:57.579854    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0425 12:25:57.579864    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:25:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0425 12:26:02.895465    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:26:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0425 12:26:02.895481    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:26:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0425 12:26:02.895491    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:26:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0425 12:26:02.919023    5605 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:26:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0425 12:26:10.241764    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0425 12:26:10.241793    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetMachineName
I0425 12:26:10.241942    5605 buildroot.go:166] provisioning hostname "multinode-034000-m03"
I0425 12:26:10.241953    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetMachineName
I0425 12:26:10.242046    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
I0425 12:26:10.242131    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
I0425 12:26:10.242223    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:10.242314    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:10.242405    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
I0425 12:26:10.242522    5605 main.go:141] libmachine: Using SSH client type: native
I0425 12:26:10.242673    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
I0425 12:26:10.242682    5605 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-034000-m03 && echo "multinode-034000-m03" | sudo tee /etc/hostname
I0425 12:26:10.316663    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-034000-m03

I0425 12:26:10.316682    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
I0425 12:26:10.316818    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
I0425 12:26:10.316914    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:10.317004    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:10.317118    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
I0425 12:26:10.317239    5605 main.go:141] libmachine: Using SSH client type: native
I0425 12:26:10.317379    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
I0425 12:26:10.317392    5605 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-034000-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-034000-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-034000-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
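The guard above relies on grep -x (whole-line match) together with GNU grep's \s class, as available on the buildroot guest. A replay of the same logic against a scratch file, with /tmp/hosts.test as a stand-in for /etc/hosts:

    printf '127.0.1.1 old-name\n' > /tmp/hosts.test
    if ! grep -xq '.*\smultinode-034000-m03' /tmp/hosts.test; then
      if grep -xq '127.0.1.1\s.*' /tmp/hosts.test; then
        sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-034000-m03/g' /tmp/hosts.test
      else
        echo '127.0.1.1 multinode-034000-m03' >> /tmp/hosts.test
      fi
    fi
    cat /tmp/hosts.test    # -> 127.0.1.1 multinode-034000-m03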
I0425 12:26:10.388211    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0425 12:26:10.388231    5605 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18757-1425/.minikube CaCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18757-1425/.minikube}
I0425 12:26:10.388245    5605 buildroot.go:174] setting up certificates
I0425 12:26:10.388253    5605 provision.go:84] configureAuth start
I0425 12:26:10.388260    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetMachineName
I0425 12:26:10.388394    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
I0425 12:26:10.388487    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
I0425 12:26:10.388566    5605 provision.go:143] copyHostCerts
I0425 12:26:10.388600    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
I0425 12:26:10.388669    5605 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem, removing ...
I0425 12:26:10.388678    5605 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
I0425 12:26:10.388829    5605 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem (1123 bytes)
I0425 12:26:10.389063    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
I0425 12:26:10.389106    5605 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem, removing ...
I0425 12:26:10.389113    5605 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
I0425 12:26:10.389212    5605 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem (1675 bytes)
I0425 12:26:10.389353    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
I0425 12:26:10.389398    5605 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem, removing ...
I0425 12:26:10.389403    5605 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
I0425 12:26:10.389489    5605 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem (1078 bytes)
I0425 12:26:10.389632    5605 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem org=jenkins.multinode-034000-m03 san=[127.0.0.1 192.169.0.18 localhost minikube multinode-034000-m03]
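provision.go generates this server certificate in-process with Go's crypto packages; for reference only, an equivalent openssl invocation covering the same org and SAN set would look roughly like this (illustrative, not what minikube actually runs):

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.multinode-034000-m03"
    printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.18,DNS:localhost,DNS:minikube,DNS:multinode-034000-m03\n' > san.cnf
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -out server.pem -extfile san.cnf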
I0425 12:26:10.569011    5605 provision.go:177] copyRemoteCerts
I0425 12:26:10.569077    5605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0425 12:26:10.569096    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
I0425 12:26:10.569236    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
I0425 12:26:10.569330    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:10.569422    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
I0425 12:26:10.569515    5605 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
I0425 12:26:10.607354    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem -> /etc/docker/server.pem
I0425 12:26:10.607430    5605 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
I0425 12:26:10.626767    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0425 12:26:10.626831    5605 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0425 12:26:10.646498    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0425 12:26:10.646563    5605 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0425 12:26:10.666096    5605 provision.go:87] duration metric: took 277.822029ms to configureAuth
I0425 12:26:10.666109    5605 buildroot.go:189] setting minikube options for container-runtime
I0425 12:26:10.666281    5605 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 12:26:10.666300    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
I0425 12:26:10.666430    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
I0425 12:26:10.666539    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
I0425 12:26:10.666623    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:10.666713    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:10.666802    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
I0425 12:26:10.666914    5605 main.go:141] libmachine: Using SSH client type: native
I0425 12:26:10.667046    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
I0425 12:26:10.667054    5605 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0425 12:26:10.731990    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0425 12:26:10.732003    5605 buildroot.go:70] root file system type: tmpfs
I0425 12:26:10.732079    5605 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0425 12:26:10.732112    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
I0425 12:26:10.732246    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
I0425 12:26:10.732340    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:10.732424    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:10.732504    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
I0425 12:26:10.732619    5605 main.go:141] libmachine: Using SSH client type: native
I0425 12:26:10.732751    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
I0425 12:26:10.732796    5605 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0425 12:26:10.809770    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0425 12:26:10.809796    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
I0425 12:26:10.809953    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
I0425 12:26:10.810055    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:10.810139    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:10.810228    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
I0425 12:26:10.810355    5605 main.go:141] libmachine: Using SSH client type: native
I0425 12:26:10.810504    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
I0425 12:26:10.810516    5605 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0425 12:26:12.371824    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
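The diff fails only because no docker.service existed yet, so the new unit is moved into place and enabled. To confirm which unit file and ExecStart systemd actually resolved afterwards (commands assume a shell on the guest):

    systemctl cat docker.service                                   # prints the installed unit file
    systemctl show docker -p FragmentPath -p ExecStart --no-pager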

I0425 12:26:12.371841    5605 machine.go:97] duration metric: took 15.304791754s to provisionDockerMachine
I0425 12:26:12.371851    5605 start.go:293] postStartSetup for "multinode-034000-m03" (driver="hyperkit")
I0425 12:26:12.371859    5605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0425 12:26:12.371868    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
I0425 12:26:12.372057    5605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0425 12:26:12.372071    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
I0425 12:26:12.372167    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
I0425 12:26:12.372262    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:12.372342    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
I0425 12:26:12.372415    5605 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
I0425 12:26:12.411023    5605 ssh_runner.go:195] Run: cat /etc/os-release
I0425 12:26:12.414121    5605 info.go:137] Remote host: Buildroot 2023.02.9
I0425 12:26:12.414134    5605 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/addons for local assets ...
I0425 12:26:12.414273    5605 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/files for local assets ...
I0425 12:26:12.414450    5605 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> 18852.pem in /etc/ssl/certs
I0425 12:26:12.414462    5605 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /etc/ssl/certs/18852.pem
I0425 12:26:12.414681    5605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0425 12:26:12.421764    5605 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /etc/ssl/certs/18852.pem (1708 bytes)
I0425 12:26:12.441656    5605 start.go:296] duration metric: took 69.793532ms for postStartSetup
I0425 12:26:12.441680    5605 fix.go:56] duration metric: took 15.464636812s for fixHost
I0425 12:26:12.441693    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
I0425 12:26:12.441822    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
I0425 12:26:12.441923    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:12.442006    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:12.442111    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
I0425 12:26:12.442227    5605 main.go:141] libmachine: Using SSH client type: native
I0425 12:26:12.442377    5605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7bb80] 0xfb7e8e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
I0425 12:26:12.442388    5605 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0425 12:26:12.511764    5605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714073172.617616605

I0425 12:26:12.511775    5605 fix.go:216] guest clock: 1714073172.617616605
I0425 12:26:12.511781    5605 fix.go:229] Guest: 2024-04-25 12:26:12.617616605 -0700 PDT Remote: 2024-04-25 12:26:12.441684 -0700 PDT m=+15.564536883 (delta=175.932605ms)
I0425 12:26:12.511802    5605 fix.go:200] guest clock delta is within tolerance: 175.932605ms
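The command echoed as `date +%!s(MISSING).%!N(MISSING)` is a logging artifact: the literal command string is passed through a printf-style logger, which mangles the unfilled format verbs (the `printf %!s(MISSING)` lines above suffer the same way). What actually runs on the guest is the epoch query used for the skew check:

    date +%s.%N    # e.g. 1714073172.617616605, compared against host time to compute the clock delta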
I0425 12:26:12.511807    5605 start.go:83] releasing machines lock for "multinode-034000-m03", held for 15.534781963s
I0425 12:26:12.511827    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
I0425 12:26:12.511955    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
I0425 12:26:12.512050    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
I0425 12:26:12.512350    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
I0425 12:26:12.512448    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
I0425 12:26:12.512526    5605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0425 12:26:12.512560    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
I0425 12:26:12.512604    5605 ssh_runner.go:195] Run: systemctl --version
I0425 12:26:12.512614    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
I0425 12:26:12.512653    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
I0425 12:26:12.512697    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
I0425 12:26:12.512731    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:12.512782    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
I0425 12:26:12.512804    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
I0425 12:26:12.512881    5605 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
I0425 12:26:12.512900    5605 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
I0425 12:26:12.512991    5605 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
I0425 12:26:12.553047    5605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0425 12:26:12.603557    5605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0425 12:26:12.603647    5605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0425 12:26:12.616926    5605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
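The find above renames any competing bridge/podman CNI configs with a .mk_disabled suffix so the container runtime and kubelet ignore them. Verifying the result on the node:

    ls -l /etc/cni/net.d/
    # expect e.g. 87-podman-bridge.conflist.mk_disabled alongside any configs left active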
I0425 12:26:12.616940    5605 start.go:494] detecting cgroup driver to use...
I0425 12:26:12.617048    5605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0425 12:26:12.631943    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0425 12:26:12.641237    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0425 12:26:12.650173    5605 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0425 12:26:12.650238    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0425 12:26:12.659312    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0425 12:26:12.668294    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0425 12:26:12.677047    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0425 12:26:12.686036    5605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0425 12:26:12.695079    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0425 12:26:12.704171    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0425 12:26:12.715711    5605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0425 12:26:12.724771    5605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0425 12:26:12.732865    5605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0425 12:26:12.741005    5605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0425 12:26:12.843167    5605 ssh_runner.go:195] Run: sudo systemctl restart containerd
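The sed batch above rewrites /etc/containerd/config.toml for the cgroupfs driver before this restart. A quick spot check that the edits landed (run on the guest):

    grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' /etc/containerd/config.toml
    systemctl is-active containerd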
I0425 12:26:12.862413    5605 start.go:494] detecting cgroup driver to use...
I0425 12:26:12.862496    5605 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0425 12:26:12.879363    5605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0425 12:26:12.890124    5605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0425 12:26:12.913617    5605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0425 12:26:12.924863    5605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0425 12:26:12.934685    5605 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0425 12:26:12.956737    5605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0425 12:26:12.967290    5605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0425 12:26:12.982168    5605 ssh_runner.go:195] Run: which cri-dockerd
I0425 12:26:12.985062    5605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0425 12:26:12.992162    5605 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
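The 189-byte drop-in itself is not echoed in the log. For orientation only, a hypothetical override in the same spirit (the flag names are real cri-dockerd options, but the exact contents minikube writes may differ):

    sudo tee /etc/systemd/system/cri-docker.service.d/10-cni.conf <<'EOF'
    [Service]
    ExecStart=
    # Hypothetical flags; minikube's actual drop-in is not shown in this log.
    ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// \
      --network-plugin=cni --cni-conf-dir=/etc/cni/net.d
    EOF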
I0425 12:26:13.005653    5605 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0425 12:26:13.106648    5605 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0425 12:26:13.239670    5605 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0425 12:26:13.239753    5605 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
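Like the drop-in above, the 130-byte daemon.json is written from memory and not echoed here. A representative (not verbatim) file consistent with the "cgroupfs" driver being configured:

    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "storage-driver": "overlay2"
    }
    EOF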
I0425 12:26:13.253753    5605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0425 12:26:13.347526    5605 ssh_runner.go:195] Run: sudo systemctl restart docker
I0425 12:27:14.390509    5605 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.041128879s)
I0425 12:27:14.390590    5605 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0425 12:27:14.425758    5605 out.go:177] 
W0425 12:27:14.447245    5605 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Apr 25 19:26:11 multinode-034000-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 25 19:26:11 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:11.253889099Z" level=info msg="Starting up"
Apr 25 19:26:11 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:11.254513791Z" level=info msg="containerd not running, starting managed containerd"
Apr 25 19:26:11 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:11.255040925Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=509
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.273744425Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.288734282Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.288813397Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.288878095Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.288914111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289122936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289174770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289315579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289360669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289392184Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289420988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289570778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.289823017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291537592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291595364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291733876Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291776432Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291909120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.291976491Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.292008486Z" level=info msg="metadata content store policy set" policy=shared
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293399881Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293458978Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293496160Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293531988Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293563578Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293630728Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293914853Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.293999712Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294037107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294068059Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294098693Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294129466Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294159331Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294189892Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294226520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294257577Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294287263Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294316964Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294352107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294383048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294416304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294446337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294475349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294504825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294533936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294562715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294593483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294623806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294654923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294684103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294713006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294746389Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294781308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294811069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294840197Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294913300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294956744Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.294986675Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295082452Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295148848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295182548Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295210008Z" level=info msg="NRI interface is disabled by configuration."
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295410075Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295497093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295583473Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 25 19:26:11 multinode-034000-m03 dockerd[509]: time="2024-04-25T19:26:11.295654988Z" level=info msg="containerd successfully booted in 0.022535s"
Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.271264701Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.287531973Z" level=info msg="Loading containers: start."
Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.410873971Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.447119268Z" level=info msg="Loading containers: done."
Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.457562430Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.457726187Z" level=info msg="Daemon has completed initialization"
Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.476547218Z" level=info msg="API listen on [::]:2376"
Apr 25 19:26:12 multinode-034000-m03 systemd[1]: Started Docker Application Container Engine.
Apr 25 19:26:12 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:12.476664959Z" level=info msg="API listen on /var/run/docker.sock"
Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.466160161Z" level=info msg="Processing signal 'terminated'"
Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.467050820Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.467221822Z" level=info msg="Daemon shutdown complete"
Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.467361507Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 25 19:26:13 multinode-034000-m03 dockerd[503]: time="2024-04-25T19:26:13.467377635Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 25 19:26:13 multinode-034000-m03 systemd[1]: Stopping Docker Application Container Engine...
Apr 25 19:26:14 multinode-034000-m03 systemd[1]: docker.service: Deactivated successfully.
Apr 25 19:26:14 multinode-034000-m03 systemd[1]: Stopped Docker Application Container Engine.
Apr 25 19:26:14 multinode-034000-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 25 19:26:14 multinode-034000-m03 dockerd[829]: time="2024-04-25T19:26:14.521860774Z" level=info msg="Starting up"
Apr 25 19:27:14 multinode-034000-m03 dockerd[829]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 25 19:27:14 multinode-034000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 25 19:27:14 multinode-034000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 25 19:27:14 multinode-034000-m03 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
W0425 12:27:14.447339    5605 out.go:239] * 
W0425 12:27:14.450127    5605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0425 12:27:14.472263    5605 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-034000 node start m03 -v=7 --alsologtostderr": exit status 90
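The decisive journal line is dockerd (pid 829) timing out after the full 60s dial deadline on /run/containerd/containerd.sock: the first dockerd start used its own managed containerd, but the restarted daemon waits on the system containerd socket, and containerd had been stopped moments earlier (the `systemctl stop -f containerd` at 12:26:12 above). The stderr already names the first checks; a direct look at containerd, whose socket was being dialed, is the natural follow-up:

    systemctl status docker.service
    journalctl -xeu docker.service
    sudo systemctl status containerd
    sudo journalctl -u containerd --no-pager | tail -n 50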
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr: exit status 2 (329.382729ms)

-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0425 12:27:14.578333    5626 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:27:14.578605    5626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:14.578611    5626 out.go:304] Setting ErrFile to fd 2...
	I0425 12:27:14.578615    5626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:14.578791    5626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:27:14.578979    5626 out.go:298] Setting JSON to false
	I0425 12:27:14.579004    5626 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:27:14.579047    5626 notify.go:220] Checking for updates...
	I0425 12:27:14.579317    5626 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:27:14.579331    5626 status.go:255] checking status of multinode-034000 ...
	I0425 12:27:14.579765    5626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:14.579824    5626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:14.588745    5626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52828
	I0425 12:27:14.589103    5626 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:14.589505    5626 main.go:141] libmachine: Using API Version  1
	I0425 12:27:14.589514    5626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:14.589716    5626 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:14.589825    5626 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:27:14.589911    5626 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:14.589972    5626 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:27:14.590926    5626 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:27:14.590940    5626 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:14.591173    5626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:14.591192    5626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:14.599674    5626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52830
	I0425 12:27:14.599987    5626 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:14.600315    5626 main.go:141] libmachine: Using API Version  1
	I0425 12:27:14.600326    5626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:14.600528    5626 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:14.600636    5626 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:27:14.600720    5626 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:14.600963    5626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:14.600984    5626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:14.609505    5626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52832
	I0425 12:27:14.609830    5626 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:14.610137    5626 main.go:141] libmachine: Using API Version  1
	I0425 12:27:14.610153    5626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:14.610334    5626 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:14.610457    5626 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:27:14.610592    5626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:14.610621    5626 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:27:14.610696    5626 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:27:14.610802    5626 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:27:14.610883    5626 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:27:14.610963    5626 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:27:14.643571    5626 ssh_runner.go:195] Run: systemctl --version
	I0425 12:27:14.648108    5626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:14.659713    5626 kubeconfig.go:125] found "multinode-034000" server: "https://192.169.0.16:8443"
	I0425 12:27:14.659736    5626 api_server.go:166] Checking apiserver status ...
	I0425 12:27:14.659772    5626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:27:14.671247    5626 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0425 12:27:14.683066    5626 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:27:14.683120    5626 ssh_runner.go:195] Run: ls
	I0425 12:27:14.686635    5626 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:27:14.690197    5626 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:27:14.690209    5626 status.go:422] multinode-034000 apiserver status = Running (err=<nil>)
	I0425 12:27:14.690219    5626 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:14.690229    5626 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:27:14.690488    5626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:14.690508    5626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:14.699093    5626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52836
	I0425 12:27:14.699394    5626 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:14.699713    5626 main.go:141] libmachine: Using API Version  1
	I0425 12:27:14.699723    5626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:14.699909    5626 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:14.700014    5626 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:27:14.700095    5626 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:14.700163    5626 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:27:14.701106    5626 status.go:330] multinode-034000-m02 host status = "Running" (err=<nil>)
	I0425 12:27:14.701113    5626 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:14.701351    5626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:14.701369    5626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:14.709935    5626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52838
	I0425 12:27:14.710256    5626 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:14.710619    5626 main.go:141] libmachine: Using API Version  1
	I0425 12:27:14.710636    5626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:14.710861    5626 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:14.710959    5626 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:27:14.711038    5626 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:14.711292    5626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:14.711314    5626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:14.719840    5626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52840
	I0425 12:27:14.720160    5626 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:14.720512    5626 main.go:141] libmachine: Using API Version  1
	I0425 12:27:14.720530    5626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:14.720744    5626 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:14.720857    5626 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:27:14.720979    5626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:14.720991    5626 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:27:14.721073    5626 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:27:14.721154    5626 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:27:14.721238    5626 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:27:14.721304    5626 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:27:14.752218    5626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:14.764107    5626 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:14.764126    5626 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:27:14.764405    5626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:14.764428    5626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:14.773261    5626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52843
	I0425 12:27:14.773596    5626 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:14.773924    5626 main.go:141] libmachine: Using API Version  1
	I0425 12:27:14.773933    5626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:14.774152    5626 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:14.774269    5626 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:27:14.774359    5626 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:14.774438    5626 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:27:14.775442    5626 status.go:330] multinode-034000-m03 host status = "Running" (err=<nil>)
	I0425 12:27:14.775451    5626 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:14.775714    5626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:14.775754    5626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:14.784781    5626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52845
	I0425 12:27:14.785151    5626 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:14.785486    5626 main.go:141] libmachine: Using API Version  1
	I0425 12:27:14.785514    5626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:14.785734    5626 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:14.785854    5626 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:27:14.785927    5626 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:14.786207    5626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:14.786228    5626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:14.794798    5626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52847
	I0425 12:27:14.795123    5626 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:14.795454    5626 main.go:141] libmachine: Using API Version  1
	I0425 12:27:14.795465    5626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:14.795702    5626 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:14.795832    5626 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:27:14.796000    5626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:14.796011    5626 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:27:14.796094    5626 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:27:14.796178    5626 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:27:14.796262    5626 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:27:14.796346    5626 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:27:14.832319    5626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:14.842541    5626 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
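The "unable to find freezer cgroup" warning in this status run is benign on cgroup v2 guests, where /proc/<pid>/cgroup exposes no freezer controller; the check falls through and the apiserver is probed directly at /healthz, which returns 200 above. An illustrative way to confirm the node's cgroup mode:

    stat -fc %T /sys/fs/cgroup   # prints "cgroup2fs" on v2, "tmpfs" on cgroup v1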
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr: exit status 2 (326.031999ms)

-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0425 12:27:16.185195    5637 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:27:16.185400    5637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:16.185405    5637 out.go:304] Setting ErrFile to fd 2...
	I0425 12:27:16.185409    5637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:16.185592    5637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:27:16.185767    5637 out.go:298] Setting JSON to false
	I0425 12:27:16.185790    5637 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:27:16.185828    5637 notify.go:220] Checking for updates...
	I0425 12:27:16.187127    5637 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:27:16.187145    5637 status.go:255] checking status of multinode-034000 ...
	I0425 12:27:16.187506    5637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:16.187548    5637 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:16.196303    5637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52851
	I0425 12:27:16.196628    5637 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:16.197011    5637 main.go:141] libmachine: Using API Version  1
	I0425 12:27:16.197020    5637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:16.197285    5637 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:16.197400    5637 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:27:16.197493    5637 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:16.197558    5637 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:27:16.198532    5637 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:27:16.198551    5637 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:16.198793    5637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:16.198814    5637 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:16.207130    5637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52853
	I0425 12:27:16.207443    5637 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:16.207798    5637 main.go:141] libmachine: Using API Version  1
	I0425 12:27:16.207811    5637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:16.208069    5637 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:16.208191    5637 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:27:16.208277    5637 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:16.208564    5637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:16.208594    5637 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:16.221199    5637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52855
	I0425 12:27:16.221544    5637 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:16.221949    5637 main.go:141] libmachine: Using API Version  1
	I0425 12:27:16.221990    5637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:16.222198    5637 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:16.222314    5637 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:27:16.222460    5637 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:16.222478    5637 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:27:16.222569    5637 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:27:16.222648    5637 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:27:16.222729    5637 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:27:16.222805    5637 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:27:16.255026    5637 ssh_runner.go:195] Run: systemctl --version
	I0425 12:27:16.259470    5637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:16.270759    5637 kubeconfig.go:125] found "multinode-034000" server: "https://192.169.0.16:8443"
	I0425 12:27:16.270784    5637 api_server.go:166] Checking apiserver status ...
	I0425 12:27:16.270823    5637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:27:16.282023    5637 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0425 12:27:16.290378    5637 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:27:16.290433    5637 ssh_runner.go:195] Run: ls
	I0425 12:27:16.293495    5637 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:27:16.296561    5637 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:27:16.296573    5637 status.go:422] multinode-034000 apiserver status = Running (err=<nil>)
	I0425 12:27:16.296581    5637 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:16.296595    5637 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:27:16.296836    5637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:16.296857    5637 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:16.305682    5637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52859
	I0425 12:27:16.306032    5637 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:16.306403    5637 main.go:141] libmachine: Using API Version  1
	I0425 12:27:16.306420    5637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:16.306645    5637 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:16.306803    5637 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:27:16.306897    5637 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:16.306972    5637 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:27:16.307947    5637 status.go:330] multinode-034000-m02 host status = "Running" (err=<nil>)
	I0425 12:27:16.307955    5637 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:16.308227    5637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:16.308249    5637 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:16.317083    5637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52861
	I0425 12:27:16.317454    5637 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:16.317795    5637 main.go:141] libmachine: Using API Version  1
	I0425 12:27:16.317812    5637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:16.317996    5637 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:16.318106    5637 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:27:16.318195    5637 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:16.318452    5637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:16.318474    5637 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:16.327129    5637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52863
	I0425 12:27:16.327494    5637 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:16.327850    5637 main.go:141] libmachine: Using API Version  1
	I0425 12:27:16.327868    5637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:16.328088    5637 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:16.328199    5637 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:27:16.328322    5637 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:16.328333    5637 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:27:16.328406    5637 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:27:16.328496    5637 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:27:16.328585    5637 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:27:16.328665    5637 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:27:16.359085    5637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:16.370338    5637 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:16.370355    5637 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:27:16.370627    5637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:16.370649    5637 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:16.379593    5637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52866
	I0425 12:27:16.379926    5637 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:16.380227    5637 main.go:141] libmachine: Using API Version  1
	I0425 12:27:16.380235    5637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:16.380458    5637 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:16.380575    5637 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:27:16.380662    5637 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:16.380737    5637 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:27:16.381697    5637 status.go:330] multinode-034000-m03 host status = "Running" (err=<nil>)
	I0425 12:27:16.381706    5637 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:16.381953    5637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:16.381985    5637 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:16.390451    5637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52868
	I0425 12:27:16.390812    5637 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:16.391145    5637 main.go:141] libmachine: Using API Version  1
	I0425 12:27:16.391153    5637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:16.391349    5637 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:16.391473    5637 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:27:16.391570    5637 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:16.391819    5637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:16.391840    5637 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:16.400224    5637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52870
	I0425 12:27:16.400540    5637 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:16.400894    5637 main.go:141] libmachine: Using API Version  1
	I0425 12:27:16.400912    5637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:16.401140    5637 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:16.401238    5637 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:27:16.401366    5637 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:16.401378    5637 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:27:16.401449    5637 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:27:16.401538    5637 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:27:16.401634    5637 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:27:16.401714    5637 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:27:16.438462    5637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:16.448668    5637 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
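
For the control-plane node the chain ends with an HTTPS GET against https://192.169.0.16:8443/healthz; a 200 response with body "ok" is what produces the "apiserver status = Running" line above. A self-contained sketch of that check, with InsecureSkipVerify standing in for the cluster-CA trust the real client builds from the profile's kubeconfig:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz reports whether the apiserver answers 200 "ok" on its
	// /healthz endpoint. TLS verification is skipped only to keep the
	// sketch self-contained; do not do this against a real cluster.
	func checkHealthz(url string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		fmt.Println(checkHealthz("https://192.169.0.16:8443/healthz"))
	}
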
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr: exit status 2 (321.178735ms)

-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0425 12:27:18.480042    5649 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:27:18.480224    5649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:18.480235    5649 out.go:304] Setting ErrFile to fd 2...
	I0425 12:27:18.480239    5649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:18.480427    5649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:27:18.480621    5649 out.go:298] Setting JSON to false
	I0425 12:27:18.480644    5649 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:27:18.480683    5649 notify.go:220] Checking for updates...
	I0425 12:27:18.482056    5649 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:27:18.482082    5649 status.go:255] checking status of multinode-034000 ...
	I0425 12:27:18.482431    5649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:18.482483    5649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:18.491229    5649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52874
	I0425 12:27:18.491609    5649 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:18.491996    5649 main.go:141] libmachine: Using API Version  1
	I0425 12:27:18.492004    5649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:18.492212    5649 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:18.492321    5649 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:27:18.492401    5649 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:18.492466    5649 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:27:18.493436    5649 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:27:18.493455    5649 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:18.493694    5649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:18.493712    5649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:18.502310    5649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52876
	I0425 12:27:18.502643    5649 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:18.502974    5649 main.go:141] libmachine: Using API Version  1
	I0425 12:27:18.502987    5649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:18.503191    5649 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:18.503300    5649 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:27:18.503393    5649 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:18.503646    5649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:18.503686    5649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:18.512064    5649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52878
	I0425 12:27:18.512377    5649 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:18.512703    5649 main.go:141] libmachine: Using API Version  1
	I0425 12:27:18.512718    5649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:18.512926    5649 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:18.513037    5649 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:27:18.513181    5649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:18.513202    5649 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:27:18.513276    5649 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:27:18.513355    5649 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:27:18.513434    5649 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:27:18.513509    5649 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:27:18.545934    5649 ssh_runner.go:195] Run: systemctl --version
	I0425 12:27:18.550342    5649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:18.562342    5649 kubeconfig.go:125] found "multinode-034000" server: "https://192.169.0.16:8443"
	I0425 12:27:18.562368    5649 api_server.go:166] Checking apiserver status ...
	I0425 12:27:18.562406    5649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:27:18.574100    5649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0425 12:27:18.582280    5649 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:27:18.582326    5649 ssh_runner.go:195] Run: ls
	I0425 12:27:18.585818    5649 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:27:18.588858    5649 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:27:18.588871    5649 status.go:422] multinode-034000 apiserver status = Running (err=<nil>)
	I0425 12:27:18.588879    5649 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:18.588890    5649 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:27:18.589141    5649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:18.589163    5649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:18.597644    5649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52882
	I0425 12:27:18.597969    5649 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:18.598330    5649 main.go:141] libmachine: Using API Version  1
	I0425 12:27:18.598349    5649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:18.598595    5649 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:18.598710    5649 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:27:18.598781    5649 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:18.598856    5649 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:27:18.599847    5649 status.go:330] multinode-034000-m02 host status = "Running" (err=<nil>)
	I0425 12:27:18.599854    5649 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:18.600102    5649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:18.600123    5649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:18.608516    5649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52884
	I0425 12:27:18.608837    5649 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:18.609151    5649 main.go:141] libmachine: Using API Version  1
	I0425 12:27:18.609161    5649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:18.609399    5649 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:18.609516    5649 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:27:18.609606    5649 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:18.609857    5649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:18.609887    5649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:18.618382    5649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52886
	I0425 12:27:18.618695    5649 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:18.619032    5649 main.go:141] libmachine: Using API Version  1
	I0425 12:27:18.619046    5649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:18.619259    5649 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:18.619363    5649 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:27:18.619476    5649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:18.619492    5649 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:27:18.619559    5649 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:27:18.619631    5649 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:27:18.619698    5649 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:27:18.619766    5649 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:27:18.650000    5649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:18.660261    5649 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:18.660280    5649 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:27:18.660549    5649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:18.660573    5649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:18.669177    5649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52889
	I0425 12:27:18.669503    5649 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:18.669816    5649 main.go:141] libmachine: Using API Version  1
	I0425 12:27:18.669831    5649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:18.670064    5649 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:18.670178    5649 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:27:18.670265    5649 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:18.670344    5649 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:27:18.671349    5649 status.go:330] multinode-034000-m03 host status = "Running" (err=<nil>)
	I0425 12:27:18.671357    5649 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:18.671612    5649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:18.671637    5649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:18.680156    5649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52891
	I0425 12:27:18.680476    5649 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:18.680786    5649 main.go:141] libmachine: Using API Version  1
	I0425 12:27:18.680804    5649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:18.681023    5649 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:18.681132    5649 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:27:18.681221    5649 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:18.681477    5649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:18.681498    5649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:18.689865    5649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52893
	I0425 12:27:18.690202    5649 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:18.690540    5649 main.go:141] libmachine: Using API Version  1
	I0425 12:27:18.690554    5649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:18.690759    5649 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:18.690871    5649 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:27:18.690997    5649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:18.691008    5649 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:27:18.691095    5649 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:27:18.691174    5649 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:27:18.691264    5649 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:27:18.691338    5649 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:27:18.728165    5649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:18.738933    5649 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
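
Each node check also samples disk pressure with sh -c "df -h /var | awk 'NR==2{print $5}'", i.e. the Use% column of the second df line. The same pipeline can be reproduced and parsed as below (executed locally here; minikube issues it through the SSH runner shown in the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// diskUsagePercent runs the probe's pipeline: df -h prints a header
	// plus one data row for the filesystem backing path, and
	// awk 'NR==2{print $5}' selects the fifth field (Use%, e.g. "23%")
	// of that second line.
	func diskUsagePercent(path string) (string, error) {
		cmd := exec.Command("sh", "-c",
			fmt.Sprintf("df -h %s | awk 'NR==2{print $5}'", path))
		out, err := cmd.Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		fmt.Println(diskUsagePercent("/var"))
	}
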
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr: exit status 2 (321.753103ms)

-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0425 12:27:20.281903    5660 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:27:20.282094    5660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:20.282100    5660 out.go:304] Setting ErrFile to fd 2...
	I0425 12:27:20.282104    5660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:20.282286    5660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:27:20.282458    5660 out.go:298] Setting JSON to false
	I0425 12:27:20.282478    5660 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:27:20.282524    5660 notify.go:220] Checking for updates...
	I0425 12:27:20.282803    5660 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:27:20.282815    5660 status.go:255] checking status of multinode-034000 ...
	I0425 12:27:20.283159    5660 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:20.283214    5660 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:20.291816    5660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52897
	I0425 12:27:20.292140    5660 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:20.292538    5660 main.go:141] libmachine: Using API Version  1
	I0425 12:27:20.292548    5660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:20.292805    5660 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:20.292934    5660 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:27:20.293034    5660 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:20.293090    5660 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:27:20.294091    5660 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:27:20.294111    5660 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:20.294354    5660 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:20.294376    5660 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:20.302665    5660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52899
	I0425 12:27:20.303009    5660 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:20.303325    5660 main.go:141] libmachine: Using API Version  1
	I0425 12:27:20.303338    5660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:20.303537    5660 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:20.303655    5660 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:27:20.303747    5660 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:20.303994    5660 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:20.304021    5660 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:20.312304    5660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52901
	I0425 12:27:20.312611    5660 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:20.312951    5660 main.go:141] libmachine: Using API Version  1
	I0425 12:27:20.312962    5660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:20.313183    5660 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:20.313297    5660 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:27:20.313445    5660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:20.313465    5660 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:27:20.313559    5660 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:27:20.313636    5660 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:27:20.313760    5660 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:27:20.313873    5660 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:27:20.347575    5660 ssh_runner.go:195] Run: systemctl --version
	I0425 12:27:20.351884    5660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:20.362638    5660 kubeconfig.go:125] found "multinode-034000" server: "https://192.169.0.16:8443"
	I0425 12:27:20.362661    5660 api_server.go:166] Checking apiserver status ...
	I0425 12:27:20.362701    5660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:27:20.373648    5660 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0425 12:27:20.381106    5660 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:27:20.381162    5660 ssh_runner.go:195] Run: ls
	I0425 12:27:20.384594    5660 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:27:20.387580    5660 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:27:20.387593    5660 status.go:422] multinode-034000 apiserver status = Running (err=<nil>)
	I0425 12:27:20.387602    5660 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:20.387615    5660 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:27:20.387856    5660 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:20.387876    5660 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:20.396651    5660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52905
	I0425 12:27:20.397022    5660 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:20.397418    5660 main.go:141] libmachine: Using API Version  1
	I0425 12:27:20.397436    5660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:20.397670    5660 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:20.397793    5660 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:27:20.397890    5660 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:20.397973    5660 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:27:20.398969    5660 status.go:330] multinode-034000-m02 host status = "Running" (err=<nil>)
	I0425 12:27:20.398980    5660 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:20.399243    5660 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:20.399270    5660 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:20.407739    5660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52907
	I0425 12:27:20.408061    5660 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:20.408366    5660 main.go:141] libmachine: Using API Version  1
	I0425 12:27:20.408375    5660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:20.408601    5660 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:20.408714    5660 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:27:20.408795    5660 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:20.409044    5660 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:20.409067    5660 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:20.417566    5660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52909
	I0425 12:27:20.417898    5660 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:20.418217    5660 main.go:141] libmachine: Using API Version  1
	I0425 12:27:20.418231    5660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:20.418461    5660 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:20.418571    5660 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:27:20.418690    5660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:20.418701    5660 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:27:20.418785    5660 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:27:20.418867    5660 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:27:20.418941    5660 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:27:20.419013    5660 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:27:20.450856    5660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:20.462096    5660 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:20.462112    5660 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:27:20.462377    5660 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:20.462399    5660 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:20.470968    5660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52912
	I0425 12:27:20.471313    5660 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:20.471647    5660 main.go:141] libmachine: Using API Version  1
	I0425 12:27:20.471666    5660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:20.471867    5660 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:20.471977    5660 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:27:20.472055    5660 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:20.472126    5660 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:27:20.473143    5660 status.go:330] multinode-034000-m03 host status = "Running" (err=<nil>)
	I0425 12:27:20.473152    5660 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:20.473392    5660 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:20.473413    5660 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:20.482041    5660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52914
	I0425 12:27:20.482382    5660 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:20.482682    5660 main.go:141] libmachine: Using API Version  1
	I0425 12:27:20.482693    5660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:20.482915    5660 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:20.483031    5660 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:27:20.483113    5660 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:20.483375    5660 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:20.483407    5660 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:20.491777    5660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52916
	I0425 12:27:20.492099    5660 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:20.492400    5660 main.go:141] libmachine: Using API Version  1
	I0425 12:27:20.492411    5660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:20.492634    5660 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:20.492738    5660 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:27:20.492857    5660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:20.492868    5660 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:27:20.492945    5660 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:27:20.493028    5660 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:27:20.493133    5660 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:27:20.493207    5660 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:27:20.529157    5660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:20.539321    5660 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
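
The recurring "unable to find freezer cgroup" warning is benign: after pgrep -xnf kube-apiserver.*minikube.* resolves the apiserver to pid 1869, the probe greps /proc/1869/cgroup for a freezer: controller line to rule out a frozen process; when no such line exists, egrep exits 1 and the code falls back to the /healthz request. A sketch of that lookup (hasFreezerLine is an illustrative name, and pid 1869 is taken from the log):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
	)

	// hasFreezerLine mirrors egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup:
	// it reports whether the process's cgroup file lists a freezer
	// controller. On cgroup-v2-only guests the line is absent, which is
	// the "Process exited with status 1" the log records.
	func hasFreezerLine(pid int) (bool, error) {
		f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return false, err
		}
		defer f.Close()
		re := regexp.MustCompile(`^[0-9]+:freezer:`)
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if re.MatchString(sc.Text()) {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		fmt.Println(hasFreezerLine(1869))
	}
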
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr: exit status 2 (321.064628ms)

-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0425 12:27:25.317603    5671 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:27:25.317889    5671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:25.317895    5671 out.go:304] Setting ErrFile to fd 2...
	I0425 12:27:25.317898    5671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:25.318078    5671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:27:25.318258    5671 out.go:298] Setting JSON to false
	I0425 12:27:25.318280    5671 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:27:25.318329    5671 notify.go:220] Checking for updates...
	I0425 12:27:25.318603    5671 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:27:25.318617    5671 status.go:255] checking status of multinode-034000 ...
	I0425 12:27:25.318970    5671 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:25.319008    5671 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:25.327705    5671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52920
	I0425 12:27:25.328124    5671 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:25.328549    5671 main.go:141] libmachine: Using API Version  1
	I0425 12:27:25.328558    5671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:25.328756    5671 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:25.328854    5671 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:27:25.328935    5671 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:25.329003    5671 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:27:25.330007    5671 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:27:25.330023    5671 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:25.330249    5671 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:25.330271    5671 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:25.338738    5671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52922
	I0425 12:27:25.339063    5671 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:25.339410    5671 main.go:141] libmachine: Using API Version  1
	I0425 12:27:25.339425    5671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:25.339713    5671 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:25.339833    5671 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:27:25.339926    5671 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:25.340180    5671 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:25.340212    5671 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:25.348636    5671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52924
	I0425 12:27:25.348952    5671 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:25.349282    5671 main.go:141] libmachine: Using API Version  1
	I0425 12:27:25.349299    5671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:25.349522    5671 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:25.349638    5671 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:27:25.349777    5671 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:25.349799    5671 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:27:25.349880    5671 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:27:25.349966    5671 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:27:25.350066    5671 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:27:25.350141    5671 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:27:25.383267    5671 ssh_runner.go:195] Run: systemctl --version
	I0425 12:27:25.387692    5671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:25.398873    5671 kubeconfig.go:125] found "multinode-034000" server: "https://192.169.0.16:8443"
	I0425 12:27:25.398896    5671 api_server.go:166] Checking apiserver status ...
	I0425 12:27:25.398938    5671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:27:25.409865    5671 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0425 12:27:25.417190    5671 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:27:25.417242    5671 ssh_runner.go:195] Run: ls
	I0425 12:27:25.420367    5671 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:27:25.423451    5671 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:27:25.423463    5671 status.go:422] multinode-034000 apiserver status = Running (err=<nil>)
	I0425 12:27:25.423472    5671 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:25.423483    5671 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:27:25.423726    5671 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:25.423745    5671 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:25.432650    5671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52928
	I0425 12:27:25.433015    5671 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:25.433355    5671 main.go:141] libmachine: Using API Version  1
	I0425 12:27:25.433366    5671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:25.433600    5671 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:25.433725    5671 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:27:25.433811    5671 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:25.433899    5671 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:27:25.434891    5671 status.go:330] multinode-034000-m02 host status = "Running" (err=<nil>)
	I0425 12:27:25.434901    5671 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:25.435153    5671 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:25.435178    5671 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:25.443641    5671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52930
	I0425 12:27:25.443979    5671 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:25.444286    5671 main.go:141] libmachine: Using API Version  1
	I0425 12:27:25.444303    5671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:25.444513    5671 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:25.444634    5671 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:27:25.444719    5671 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:25.444988    5671 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:25.445017    5671 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:25.453392    5671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52932
	I0425 12:27:25.453717    5671 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:25.454052    5671 main.go:141] libmachine: Using API Version  1
	I0425 12:27:25.454061    5671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:25.454256    5671 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:25.454365    5671 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:27:25.454489    5671 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:25.454501    5671 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:27:25.454590    5671 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:27:25.454661    5671 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:27:25.454729    5671 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:27:25.454812    5671 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:27:25.485211    5671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:25.496536    5671 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:25.496568    5671 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:27:25.496852    5671 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:25.496882    5671 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:25.505405    5671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52935
	I0425 12:27:25.505738    5671 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:25.506070    5671 main.go:141] libmachine: Using API Version  1
	I0425 12:27:25.506085    5671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:25.506305    5671 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:25.506435    5671 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:27:25.506517    5671 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:25.506592    5671 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:27:25.507606    5671 status.go:330] multinode-034000-m03 host status = "Running" (err=<nil>)
	I0425 12:27:25.507614    5671 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:25.507868    5671 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:25.507900    5671 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:25.516438    5671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52937
	I0425 12:27:25.516766    5671 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:25.517132    5671 main.go:141] libmachine: Using API Version  1
	I0425 12:27:25.517149    5671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:25.517356    5671 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:25.517454    5671 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:27:25.517529    5671 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:25.517791    5671 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:25.517821    5671 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:25.526166    5671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52939
	I0425 12:27:25.526504    5671 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:25.526844    5671 main.go:141] libmachine: Using API Version  1
	I0425 12:27:25.526857    5671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:25.527068    5671 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:25.527175    5671 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:27:25.527299    5671 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:25.527309    5671 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:27:25.527394    5671 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:27:25.527475    5671 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:27:25.527559    5671 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:27:25.527628    5671 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:27:25.565015    5671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:25.575661    5671 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0425 12:27:26.179634    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
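
The driver at multinode_test.go:290 keeps re-running status because the command exits 2 whenever any node component is down (here the stopped kubelet on multinode-034000-m03), so the test is effectively polling for recovery. A condensed sketch of that pattern, with an assumed deadline and retry interval rather than the test's actual values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// pollStatus re-runs `minikube status` until it exits 0 or the
	// deadline passes. A non-zero exit (status 2 above) just means
	// "not healthy yet", so the loop sleeps and retries.
	func pollStatus(profile string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for {
			err := exec.Command("out/minikube-darwin-amd64",
				"-p", profile, "status").Run()
			if err == nil {
				return nil // every node reports Running/Configured
			}
			if time.Now().After(stop) {
				return fmt.Errorf("cluster %s still degraded: %w", profile, err)
			}
			time.Sleep(2 * time.Second)
		}
	}

	func main() {
		fmt.Println(pollStatus("multinode-034000", 30*time.Second))
	}
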
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr: exit status 2 (322.215066ms)

-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0425 12:27:31.890376    5687 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:27:31.890667    5687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:31.890672    5687 out.go:304] Setting ErrFile to fd 2...
	I0425 12:27:31.890676    5687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:31.890857    5687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:27:31.891032    5687 out.go:298] Setting JSON to false
	I0425 12:27:31.891055    5687 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:27:31.891097    5687 notify.go:220] Checking for updates...
	I0425 12:27:31.891374    5687 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:27:31.891387    5687 status.go:255] checking status of multinode-034000 ...
	I0425 12:27:31.891764    5687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:31.891814    5687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:31.900470    5687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52943
	I0425 12:27:31.900824    5687 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:31.901246    5687 main.go:141] libmachine: Using API Version  1
	I0425 12:27:31.901256    5687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:31.901501    5687 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:31.901633    5687 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:27:31.901736    5687 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:31.901803    5687 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:27:31.902777    5687 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:27:31.902797    5687 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:31.903034    5687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:31.903090    5687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:31.911386    5687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52945
	I0425 12:27:31.911706    5687 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:31.912031    5687 main.go:141] libmachine: Using API Version  1
	I0425 12:27:31.912042    5687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:31.912237    5687 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:31.912359    5687 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:27:31.912441    5687 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:31.912717    5687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:31.912744    5687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:31.921127    5687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52947
	I0425 12:27:31.921433    5687 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:31.921784    5687 main.go:141] libmachine: Using API Version  1
	I0425 12:27:31.921805    5687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:31.922006    5687 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:31.922108    5687 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:27:31.922243    5687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:31.922267    5687 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:27:31.922351    5687 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:27:31.922437    5687 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:27:31.922524    5687 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:27:31.922605    5687 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:27:31.955647    5687 ssh_runner.go:195] Run: systemctl --version
	I0425 12:27:31.960083    5687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:31.971855    5687 kubeconfig.go:125] found "multinode-034000" server: "https://192.169.0.16:8443"
	I0425 12:27:31.971880    5687 api_server.go:166] Checking apiserver status ...
	I0425 12:27:31.971918    5687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:27:31.983964    5687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0425 12:27:31.992087    5687 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:27:31.992130    5687 ssh_runner.go:195] Run: ls
	I0425 12:27:31.995327    5687 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:27:31.998361    5687 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:27:31.998373    5687 status.go:422] multinode-034000 apiserver status = Running (err=<nil>)
	I0425 12:27:31.998382    5687 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:31.998393    5687 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:27:31.998635    5687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:31.998667    5687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:32.007212    5687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52951
	I0425 12:27:32.007529    5687 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:32.007843    5687 main.go:141] libmachine: Using API Version  1
	I0425 12:27:32.007854    5687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:32.008085    5687 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:32.008194    5687 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:27:32.008270    5687 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:32.008346    5687 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:27:32.009294    5687 status.go:330] multinode-034000-m02 host status = "Running" (err=<nil>)
	I0425 12:27:32.009301    5687 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:32.009539    5687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:32.009562    5687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:32.017972    5687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52953
	I0425 12:27:32.018316    5687 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:32.018650    5687 main.go:141] libmachine: Using API Version  1
	I0425 12:27:32.018662    5687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:32.018873    5687 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:32.018982    5687 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:27:32.019073    5687 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:32.019316    5687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:32.019341    5687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:32.027665    5687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52955
	I0425 12:27:32.028002    5687 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:32.028349    5687 main.go:141] libmachine: Using API Version  1
	I0425 12:27:32.028363    5687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:32.028575    5687 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:32.028680    5687 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:27:32.028830    5687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:32.028842    5687 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:27:32.028919    5687 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:27:32.028989    5687 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:27:32.029076    5687 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:27:32.029149    5687 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:27:32.060363    5687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:32.071721    5687 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:32.071738    5687 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:27:32.072012    5687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:32.072045    5687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:32.080670    5687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52958
	I0425 12:27:32.081012    5687 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:32.081362    5687 main.go:141] libmachine: Using API Version  1
	I0425 12:27:32.081379    5687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:32.081583    5687 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:32.081682    5687 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:27:32.081765    5687 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:32.081830    5687 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:27:32.082789    5687 status.go:330] multinode-034000-m03 host status = "Running" (err=<nil>)
	I0425 12:27:32.082799    5687 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:32.083042    5687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:32.083069    5687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:32.091467    5687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52960
	I0425 12:27:32.091802    5687 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:32.092105    5687 main.go:141] libmachine: Using API Version  1
	I0425 12:27:32.092115    5687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:32.092325    5687 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:32.092442    5687 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:27:32.092517    5687 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:32.092770    5687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:32.092802    5687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:32.101282    5687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52962
	I0425 12:27:32.101611    5687 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:32.101951    5687 main.go:141] libmachine: Using API Version  1
	I0425 12:27:32.101969    5687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:32.102190    5687 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:32.102308    5687 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:27:32.102443    5687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:32.102455    5687 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:27:32.102533    5687 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:27:32.102626    5687 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:27:32.102718    5687 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:27:32.102798    5687 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:27:32.139844    5687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:32.149820    5687 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
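For the control-plane node the check goes further: it resolves the apiserver PID (1869 in this run) with pgrep, attempts to read that PID's freezer cgroup, and, when that fails (the W0425 warning above, harmless on guests that do not expose a freezer hierarchy at that path), falls back to an HTTPS probe of /healthz. The same three steps approximated from a shell; the curl invocation is an assumption, since the log only shows minikube's internal HTTP client, and anonymous /healthz access relies on Kubernetes' default system:public-info-viewer binding:

	# 1. Find the apiserver process inside the control-plane node.
	minikube -p multinode-034000 ssh -- 'sudo pgrep -xnf "kube-apiserver.*minikube.*"'
	# 2. The cgroup lookup that produced the W0425 warning (the PID is specific to this run).
	minikube -p multinode-034000 ssh -- 'sudo egrep "^[0-9]+:freezer:" /proc/1869/cgroup'
	# 3. The fallback health probe; -k skips verification of minikube's self-signed CA.
	curl -k https://192.169.0.16:8443/healthz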
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr: exit status 2 (321.560892ms)

-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0425 12:27:36.517679    5701 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:27:36.517879    5701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:36.517884    5701 out.go:304] Setting ErrFile to fd 2...
	I0425 12:27:36.517888    5701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:36.518088    5701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:27:36.518267    5701 out.go:298] Setting JSON to false
	I0425 12:27:36.518290    5701 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:27:36.518330    5701 notify.go:220] Checking for updates...
	I0425 12:27:36.519461    5701 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:27:36.519483    5701 status.go:255] checking status of multinode-034000 ...
	I0425 12:27:36.519836    5701 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:36.519876    5701 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:36.528508    5701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52966
	I0425 12:27:36.528882    5701 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:36.529321    5701 main.go:141] libmachine: Using API Version  1
	I0425 12:27:36.529338    5701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:36.529546    5701 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:36.529662    5701 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:27:36.529751    5701 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:36.529824    5701 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:27:36.530788    5701 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:27:36.530808    5701 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:36.531048    5701 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:36.531066    5701 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:36.539375    5701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52968
	I0425 12:27:36.539844    5701 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:36.540194    5701 main.go:141] libmachine: Using API Version  1
	I0425 12:27:36.540207    5701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:36.540472    5701 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:36.540609    5701 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:27:36.540702    5701 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:36.540955    5701 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:36.540983    5701 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:36.549416    5701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52970
	I0425 12:27:36.549722    5701 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:36.550058    5701 main.go:141] libmachine: Using API Version  1
	I0425 12:27:36.550076    5701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:36.550268    5701 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:36.550367    5701 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:27:36.550533    5701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:36.550554    5701 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:27:36.550639    5701 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:27:36.550716    5701 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:27:36.550817    5701 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:27:36.550899    5701 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:27:36.583621    5701 ssh_runner.go:195] Run: systemctl --version
	I0425 12:27:36.587875    5701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:36.599704    5701 kubeconfig.go:125] found "multinode-034000" server: "https://192.169.0.16:8443"
	I0425 12:27:36.599727    5701 api_server.go:166] Checking apiserver status ...
	I0425 12:27:36.599764    5701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:27:36.611704    5701 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0425 12:27:36.619742    5701 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:27:36.619783    5701 ssh_runner.go:195] Run: ls
	I0425 12:27:36.622871    5701 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:27:36.625906    5701 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:27:36.625918    5701 status.go:422] multinode-034000 apiserver status = Running (err=<nil>)
	I0425 12:27:36.625927    5701 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:36.625947    5701 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:27:36.626184    5701 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:36.626205    5701 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:36.634803    5701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52974
	I0425 12:27:36.635139    5701 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:36.635464    5701 main.go:141] libmachine: Using API Version  1
	I0425 12:27:36.635479    5701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:36.635711    5701 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:36.635834    5701 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:27:36.635911    5701 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:36.635992    5701 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:27:36.636959    5701 status.go:330] multinode-034000-m02 host status = "Running" (err=<nil>)
	I0425 12:27:36.636966    5701 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:36.637218    5701 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:36.637241    5701 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:36.645651    5701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52976
	I0425 12:27:36.645991    5701 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:36.646332    5701 main.go:141] libmachine: Using API Version  1
	I0425 12:27:36.646348    5701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:36.646547    5701 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:36.646658    5701 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:27:36.646748    5701 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:36.647033    5701 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:36.647055    5701 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:36.655496    5701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52978
	I0425 12:27:36.655855    5701 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:36.656214    5701 main.go:141] libmachine: Using API Version  1
	I0425 12:27:36.656228    5701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:36.656446    5701 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:36.656570    5701 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:27:36.656705    5701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:36.656717    5701 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:27:36.656793    5701 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:27:36.656878    5701 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:27:36.656956    5701 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:27:36.657026    5701 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:27:36.687928    5701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:36.699988    5701 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:36.700025    5701 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:27:36.700310    5701 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:36.700339    5701 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:36.708870    5701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52981
	I0425 12:27:36.709197    5701 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:36.709557    5701 main.go:141] libmachine: Using API Version  1
	I0425 12:27:36.709574    5701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:36.709779    5701 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:36.709887    5701 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:27:36.709974    5701 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:36.710042    5701 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:27:36.711012    5701 status.go:330] multinode-034000-m03 host status = "Running" (err=<nil>)
	I0425 12:27:36.711022    5701 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:36.711269    5701 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:36.711294    5701 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:36.719686    5701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52983
	I0425 12:27:36.720003    5701 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:36.720330    5701 main.go:141] libmachine: Using API Version  1
	I0425 12:27:36.720344    5701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:36.720560    5701 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:36.720670    5701 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:27:36.720740    5701 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:36.720979    5701 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:36.721000    5701 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:36.729343    5701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52985
	I0425 12:27:36.729664    5701 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:36.730004    5701 main.go:141] libmachine: Using API Version  1
	I0425 12:27:36.730020    5701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:36.730222    5701 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:36.730332    5701 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:27:36.730458    5701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:36.730469    5701 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:27:36.730539    5701 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:27:36.730614    5701 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:27:36.730699    5701 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:27:36.730768    5701 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:27:36.765789    5701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:36.775855    5701 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
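Each poll in this block exits with status 2 rather than 0 because at least one tracked component, kubelet on m03, is Stopped while every Host is Running; the harness treats any non-zero exit as a failed poll and tries again (the attempts land at 12:27:26, :31, :36, :47 and finally 12:28:01). A sketch of that retry pattern; the fixed sleep is an assumption, as the test's real backoff schedule is not visible in the log:

	# Retry until status exits 0 (fully healthy) or attempts run out.
	for attempt in 1 2 3 4 5; do
	  out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr && break
	  sleep 10  # illustrative interval only
	done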
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr: exit status 2 (323.018193ms)

-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0425 12:27:47.339175    5717 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:27:47.339384    5717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:47.339389    5717 out.go:304] Setting ErrFile to fd 2...
	I0425 12:27:47.339393    5717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:27:47.339558    5717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:27:47.339749    5717 out.go:298] Setting JSON to false
	I0425 12:27:47.339771    5717 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:27:47.339808    5717 notify.go:220] Checking for updates...
	I0425 12:27:47.341045    5717 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:27:47.341065    5717 status.go:255] checking status of multinode-034000 ...
	I0425 12:27:47.341485    5717 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:47.341517    5717 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:47.350463    5717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52989
	I0425 12:27:47.350797    5717 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:47.351194    5717 main.go:141] libmachine: Using API Version  1
	I0425 12:27:47.351210    5717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:47.351409    5717 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:47.351516    5717 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:27:47.351602    5717 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:47.351687    5717 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:27:47.352615    5717 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:27:47.352631    5717 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:47.352862    5717 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:47.352880    5717 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:47.361246    5717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52991
	I0425 12:27:47.361581    5717 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:47.361903    5717 main.go:141] libmachine: Using API Version  1
	I0425 12:27:47.361914    5717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:47.362126    5717 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:47.362230    5717 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:27:47.362313    5717 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:27:47.362556    5717 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:47.362581    5717 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:47.370913    5717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52993
	I0425 12:27:47.371210    5717 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:47.371529    5717 main.go:141] libmachine: Using API Version  1
	I0425 12:27:47.371544    5717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:47.371740    5717 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:47.371854    5717 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:27:47.372007    5717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:47.372029    5717 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:27:47.372105    5717 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:27:47.372185    5717 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:27:47.372274    5717 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:27:47.372350    5717 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:27:47.405991    5717 ssh_runner.go:195] Run: systemctl --version
	I0425 12:27:47.410356    5717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:47.422231    5717 kubeconfig.go:125] found "multinode-034000" server: "https://192.169.0.16:8443"
	I0425 12:27:47.422254    5717 api_server.go:166] Checking apiserver status ...
	I0425 12:27:47.422294    5717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:27:47.434307    5717 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0425 12:27:47.442263    5717 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:27:47.442313    5717 ssh_runner.go:195] Run: ls
	I0425 12:27:47.445707    5717 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:27:47.449250    5717 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:27:47.449263    5717 status.go:422] multinode-034000 apiserver status = Running (err=<nil>)
	I0425 12:27:47.449272    5717 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:47.449292    5717 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:27:47.449557    5717 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:47.449577    5717 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:47.458073    5717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52997
	I0425 12:27:47.458416    5717 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:47.458741    5717 main.go:141] libmachine: Using API Version  1
	I0425 12:27:47.458749    5717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:47.458965    5717 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:47.459083    5717 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:27:47.459167    5717 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:47.459246    5717 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:27:47.460210    5717 status.go:330] multinode-034000-m02 host status = "Running" (err=<nil>)
	I0425 12:27:47.460221    5717 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:47.460462    5717 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:47.460483    5717 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:47.468953    5717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52999
	I0425 12:27:47.469280    5717 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:47.469587    5717 main.go:141] libmachine: Using API Version  1
	I0425 12:27:47.469601    5717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:47.469808    5717 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:47.469903    5717 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:27:47.469976    5717 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:27:47.470241    5717 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:47.470268    5717 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:47.478552    5717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53001
	I0425 12:27:47.478886    5717 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:47.479242    5717 main.go:141] libmachine: Using API Version  1
	I0425 12:27:47.479263    5717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:47.479473    5717 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:47.479584    5717 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:27:47.479717    5717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:47.479730    5717 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:27:47.479804    5717 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:27:47.479883    5717 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:27:47.479960    5717 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:27:47.480046    5717 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:27:47.510794    5717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:47.521045    5717 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:27:47.521063    5717 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:27:47.521352    5717 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:47.521381    5717 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:47.529899    5717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53004
	I0425 12:27:47.530238    5717 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:47.530591    5717 main.go:141] libmachine: Using API Version  1
	I0425 12:27:47.530606    5717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:47.530833    5717 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:47.530960    5717 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:27:47.531038    5717 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:27:47.531127    5717 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:27:47.532075    5717 status.go:330] multinode-034000-m03 host status = "Running" (err=<nil>)
	I0425 12:27:47.532082    5717 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:47.532340    5717 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:47.532369    5717 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:47.540822    5717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53006
	I0425 12:27:47.541149    5717 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:47.541470    5717 main.go:141] libmachine: Using API Version  1
	I0425 12:27:47.541486    5717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:47.541682    5717 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:47.541780    5717 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:27:47.541863    5717 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:27:47.542103    5717 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:27:47.542131    5717 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:27:47.550642    5717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53008
	I0425 12:27:47.550969    5717 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:27:47.551288    5717 main.go:141] libmachine: Using API Version  1
	I0425 12:27:47.551306    5717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:27:47.551523    5717 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:27:47.551642    5717 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:27:47.551762    5717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:27:47.551772    5717 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:27:47.551850    5717 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:27:47.551930    5717 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:27:47.552006    5717 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:27:47.552108    5717 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:27:47.587222    5717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:27:47.598375    5717 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
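The recurring ssh_runner command df -h /var | awk 'NR==2{print $5}' is a disk-pressure probe: NR==2 selects the first data row of df's output and $5 is its Use% column, so it prints something like 23% for the node's /var filesystem. A sketch of how such a reading could gate a warning; the 90% threshold below is illustrative, not minikube's actual cutoff:

	# Print /var usage and warn when it crosses a chosen threshold.
	usage=$(df -h /var | awk 'NR==2{print $5}' | tr -d '%')
	if [ "$usage" -ge 90 ]; then
	  echo "warning: /var is ${usage}% full"
	fi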
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr: exit status 2 (329.923694ms)

-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0425 12:28:01.217241    5732 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:28:01.217556    5732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:28:01.217561    5732 out.go:304] Setting ErrFile to fd 2...
	I0425 12:28:01.217565    5732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:28:01.217776    5732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:28:01.217972    5732 out.go:298] Setting JSON to false
	I0425 12:28:01.217994    5732 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:28:01.218028    5732 notify.go:220] Checking for updates...
	I0425 12:28:01.218330    5732 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:28:01.218344    5732 status.go:255] checking status of multinode-034000 ...
	I0425 12:28:01.218681    5732 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:28:01.218732    5732 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:28:01.227918    5732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53012
	I0425 12:28:01.228280    5732 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:28:01.228674    5732 main.go:141] libmachine: Using API Version  1
	I0425 12:28:01.228684    5732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:28:01.228903    5732 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:28:01.229017    5732 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:28:01.229097    5732 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:28:01.229182    5732 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:28:01.230108    5732 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:28:01.230128    5732 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:28:01.230363    5732 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:28:01.230385    5732 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:28:01.238849    5732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53014
	I0425 12:28:01.239196    5732 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:28:01.239550    5732 main.go:141] libmachine: Using API Version  1
	I0425 12:28:01.239569    5732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:28:01.239770    5732 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:28:01.239860    5732 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:28:01.239947    5732 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:28:01.240213    5732 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:28:01.240240    5732 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:28:01.248614    5732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53016
	I0425 12:28:01.248920    5732 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:28:01.249208    5732 main.go:141] libmachine: Using API Version  1
	I0425 12:28:01.249217    5732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:28:01.249419    5732 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:28:01.249535    5732 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:28:01.249670    5732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:28:01.249692    5732 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:01.249766    5732 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:28:01.249834    5732 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:01.249915    5732 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:28:01.250031    5732 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:28:01.283699    5732 ssh_runner.go:195] Run: systemctl --version
	I0425 12:28:01.287900    5732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:28:01.298898    5732 kubeconfig.go:125] found "multinode-034000" server: "https://192.169.0.16:8443"
	I0425 12:28:01.298925    5732 api_server.go:166] Checking apiserver status ...
	I0425 12:28:01.298966    5732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:28:01.309758    5732 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0425 12:28:01.317389    5732 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:28:01.317447    5732 ssh_runner.go:195] Run: ls
	I0425 12:28:01.320618    5732 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:28:01.323586    5732 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:28:01.323597    5732 status.go:422] multinode-034000 apiserver status = Running (err=<nil>)
	I0425 12:28:01.323607    5732 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:28:01.323617    5732 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:28:01.323883    5732 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:28:01.323903    5732 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:28:01.332460    5732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53020
	I0425 12:28:01.332771    5732 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:28:01.333112    5732 main.go:141] libmachine: Using API Version  1
	I0425 12:28:01.333126    5732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:28:01.333318    5732 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:28:01.333443    5732 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:28:01.333539    5732 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:28:01.333605    5732 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:28:01.334555    5732 status.go:330] multinode-034000-m02 host status = "Running" (err=<nil>)
	I0425 12:28:01.334565    5732 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:28:01.334802    5732 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:28:01.334822    5732 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:28:01.343261    5732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53022
	I0425 12:28:01.343578    5732 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:28:01.343876    5732 main.go:141] libmachine: Using API Version  1
	I0425 12:28:01.343884    5732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:28:01.344089    5732 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:28:01.344224    5732 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:28:01.344308    5732 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:28:01.344554    5732 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:28:01.344578    5732 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:28:01.353037    5732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53024
	I0425 12:28:01.353364    5732 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:28:01.353704    5732 main.go:141] libmachine: Using API Version  1
	I0425 12:28:01.353716    5732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:28:01.353920    5732 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:28:01.354026    5732 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:28:01.354153    5732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:28:01.354170    5732 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:28:01.354247    5732 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:28:01.354328    5732 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:28:01.354408    5732 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:28:01.354487    5732 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:28:01.385265    5732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:28:01.406078    5732 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:28:01.406096    5732 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:28:01.406383    5732 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:28:01.406415    5732 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:28:01.415172    5732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53027
	I0425 12:28:01.415505    5732 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:28:01.415815    5732 main.go:141] libmachine: Using API Version  1
	I0425 12:28:01.415825    5732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:28:01.416027    5732 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:28:01.416129    5732 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:28:01.416206    5732 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:28:01.416278    5732 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:28:01.417233    5732 status.go:330] multinode-034000-m03 host status = "Running" (err=<nil>)
	I0425 12:28:01.417251    5732 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:28:01.417492    5732 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:28:01.417519    5732 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:28:01.425996    5732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53029
	I0425 12:28:01.426309    5732 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:28:01.426621    5732 main.go:141] libmachine: Using API Version  1
	I0425 12:28:01.426631    5732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:28:01.426842    5732 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:28:01.426960    5732 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:28:01.427043    5732 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:28:01.427289    5732 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:28:01.427317    5732 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:28:01.435688    5732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53031
	I0425 12:28:01.436014    5732 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:28:01.436326    5732 main.go:141] libmachine: Using API Version  1
	I0425 12:28:01.436343    5732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:28:01.436528    5732 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:28:01.436638    5732 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:28:01.436764    5732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:28:01.436775    5732 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:28:01.436862    5732 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:28:01.436953    5732 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:28:01.437037    5732 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:28:01.437109    5732 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:28:01.473085    5732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:28:01.484285    5732 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
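Reading the stdout blocks above: worker nodes never print apiserver or kubeconfig rows because those fields are Irrelevant for them (visible in the status structs on the stderr side), and the text printer omits them. When the full structs are needed, the JSON view keeps every field:

	# Same data as the text view, but with the Irrelevant fields preserved.
	out/minikube-darwin-amd64 -p multinode-034000 status -o json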
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-034000 status -v=7 --alsologtostderr" : exit status 2
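The failure itself: StartAfterStop stops a node and starts it again, and here, after the restart, kubelet on m03 never left Stopped across the polls from 12:27:25 to 12:28:01, so the test gave up with exit status 2 and moved on to post-mortem collection below. Since every poll shows Host:Running for m03, the node was still reachable; a plausible manual follow-up (assuming SSH access still works, as those rows suggest) would be to pull kubelet's journal:

	# Inspect why kubelet stayed down on the restarted node.
	minikube -p multinode-034000 ssh -n multinode-034000-m03 -- \
	  'sudo journalctl -u kubelet --no-pager | tail -n 50'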
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-034000 -n multinode-034000
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-034000 logs -n 25: (2.077156291s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-034000 cp multinode-034000:/home/docker/cp-test.txt                                                               | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03:/home/docker/cp-test_multinode-034000_multinode-034000-m03.txt                                         |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000 sudo cat                                                                                                   |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n multinode-034000-m03 sudo cat                                                                       | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | /home/docker/cp-test_multinode-034000_multinode-034000-m03.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp testdata/cp-test.txt                                                                                    | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m02:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp multinode-034000-m02:/home/docker/cp-test.txt                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1757473431/001/cp-test_multinode-034000-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp multinode-034000-m02:/home/docker/cp-test.txt                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000:/home/docker/cp-test_multinode-034000-m02_multinode-034000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n multinode-034000 sudo cat                                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | /home/docker/cp-test_multinode-034000-m02_multinode-034000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp multinode-034000-m02:/home/docker/cp-test.txt                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03:/home/docker/cp-test_multinode-034000-m02_multinode-034000-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n multinode-034000-m03 sudo cat                                                                       | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | /home/docker/cp-test_multinode-034000-m02_multinode-034000-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp testdata/cp-test.txt                                                                                    | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp multinode-034000-m03:/home/docker/cp-test.txt                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1757473431/001/cp-test_multinode-034000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp multinode-034000-m03:/home/docker/cp-test.txt                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000:/home/docker/cp-test_multinode-034000-m03_multinode-034000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n multinode-034000 sudo cat                                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | /home/docker/cp-test_multinode-034000-m03_multinode-034000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp multinode-034000-m03:/home/docker/cp-test.txt                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m02:/home/docker/cp-test_multinode-034000-m03_multinode-034000-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n multinode-034000-m02 sudo cat                                                                       | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | /home/docker/cp-test_multinode-034000-m03_multinode-034000-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-034000 node stop m03                                                                                              | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	| node    | multinode-034000 node start                                                                                                 | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT |                     |
	|         | m03 -v=7 --alsologtostderr                                                                                                  |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
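The last two audit rows record the trigger for this failure: `node stop m03` completed, but the following `node start m03` has no End Time. A hedged repro sketch in Go, using only the binary path, profile, and arguments shown in the table:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run shells out to the same binary the audit table records.
    func run(args ...string) error {
        cmd := exec.Command("out/minikube-darwin-amd64", args...)
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        if err := run("-p", "multinode-034000", "node", "stop", "m03"); err != nil {
            fmt.Println("stop failed:", err)
            return
        }
        // This is the step whose End Time column is empty above.
        if err := run("-p", "multinode-034000", "node", "start", "m03",
            "-v=7", "--alsologtostderr"); err != nil {
            fmt.Println("start failed:", err)
        }
    }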
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 12:23:25
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
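The header lines above fully describe the log layout. A small Go sketch that parses a header of the form `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`; the regexp is derived from that one-line spec, not taken from minikube or klog source:

    package main

    import (
        "fmt"
        "regexp"
    )

    // One capture group per field of the documented header.
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
        line := "I0425 12:23:25.603316    5272 out.go:291] Setting OutFile to fd 1 ..."
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("level=%s month=%s day=%s time=%s thread=%s file=%s line=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
    }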
	I0425 12:23:25.603316    5272 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:23:25.603519    5272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:23:25.603525    5272 out.go:304] Setting ErrFile to fd 2...
	I0425 12:23:25.603528    5272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:23:25.603711    5272 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:23:25.605118    5272 out.go:298] Setting JSON to false
	I0425 12:23:25.626743    5272 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4975,"bootTime":1714068030,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0425 12:23:25.626839    5272 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 12:23:25.655199    5272 out.go:177] * [multinode-034000] minikube v1.33.0 on Darwin 14.4.1
	I0425 12:23:25.696358    5272 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 12:23:25.696414    5272 notify.go:220] Checking for updates...
	I0425 12:23:25.740121    5272 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:23:25.761140    5272 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 12:23:25.781969    5272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 12:23:25.802908    5272 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	I0425 12:23:25.823958    5272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 12:23:25.845642    5272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 12:23:25.874793    5272 out.go:177] * Using the hyperkit driver based on user configuration
	I0425 12:23:25.916138    5272 start.go:297] selected driver: hyperkit
	I0425 12:23:25.916173    5272 start.go:901] validating driver "hyperkit" against <nil>
	I0425 12:23:25.916193    5272 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 12:23:25.920516    5272 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:23:25.920629    5272 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18757-1425/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0425 12:23:25.928798    5272 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0425 12:23:25.932660    5272 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:23:25.932685    5272 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0425 12:23:25.932718    5272 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 12:23:25.932921    5272 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 12:23:25.932979    5272 cni.go:84] Creating CNI manager for ""
	I0425 12:23:25.932989    5272 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0425 12:23:25.932995    5272 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0425 12:23:25.933061    5272 start.go:340] cluster config:
	{Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:23:25.933141    5272 iso.go:125] acquiring lock: {Name:mk776ce15f524979e50f0732af6183703dc958eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:23:25.975107    5272 out.go:177] * Starting "multinode-034000" primary control-plane node in "multinode-034000" cluster
	I0425 12:23:25.996043    5272 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:23:25.996119    5272 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 12:23:25.996139    5272 cache.go:56] Caching tarball of preloaded images
	I0425 12:23:25.996339    5272 preload.go:173] Found /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 12:23:25.996363    5272 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 12:23:25.996858    5272 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:23:25.996900    5272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json: {Name:mk803fb48530ec8f7c1c2c22d0fdda78fb5e57fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
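The cluster config logged at start.go:340 is persisted to the profile's config.json (the WriteFile above), which is what `status` and `profile list` read back later. A minimal sketch that decodes a few of those fields, assuming the JSON keys match the field names in the logged struct; unknown fields are ignored by encoding/json, so a partial struct suffices:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Partial mirrors of the fields visible in the logged cluster config.
    type Node struct {
        Name         string
        IP           string
        Port         int
        ControlPlane bool
        Worker       bool
    }

    type ClusterConfig struct {
        Name   string
        Driver string
        Nodes  []Node
    }

    func main() {
        data, err := os.ReadFile(os.ExpandEnv(
            "$HOME/.minikube/profiles/multinode-034000/config.json"))
        if err != nil {
            fmt.Println(err)
            return
        }
        var cc ClusterConfig
        if err := json.Unmarshal(data, &cc); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%s (driver=%s): %d node(s)\n", cc.Name, cc.Driver, len(cc.Nodes))
        for _, n := range cc.Nodes {
            fmt.Printf("  %q %s:%d control-plane=%v worker=%v\n",
                n.Name, n.IP, n.Port, n.ControlPlane, n.Worker)
        }
    }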
	I0425 12:23:25.997496    5272 start.go:360] acquireMachinesLock for multinode-034000: {Name:mk3030f9170bc25c9124548f80d3e90a8c4abff5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 12:23:25.997581    5272 start.go:364] duration metric: took 70.872µs to acquireMachinesLock for "multinode-034000"
	I0425 12:23:25.997612    5272 start.go:93] Provisioning new machine with config: &{Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0425 12:23:25.997674    5272 start.go:125] createHost starting for "" (driver="hyperkit")
	I0425 12:23:26.018940    5272 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0425 12:23:26.019287    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:23:26.019351    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:23:26.028831    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52334
	I0425 12:23:26.029193    5272 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:23:26.029598    5272 main.go:141] libmachine: Using API Version  1
	I0425 12:23:26.029624    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:23:26.029845    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:23:26.029963    5272 main.go:141] libmachine: (multinode-034000) Calling .GetMachineName
	I0425 12:23:26.030120    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:23:26.030271    5272 start.go:159] libmachine.API.Create for "multinode-034000" (driver="hyperkit")
	I0425 12:23:26.030293    5272 client.go:168] LocalClient.Create starting
	I0425 12:23:26.030329    5272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem
	I0425 12:23:26.030384    5272 main.go:141] libmachine: Decoding PEM data...
	I0425 12:23:26.030402    5272 main.go:141] libmachine: Parsing certificate...
	I0425 12:23:26.030458    5272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem
	I0425 12:23:26.030495    5272 main.go:141] libmachine: Decoding PEM data...
	I0425 12:23:26.030506    5272 main.go:141] libmachine: Parsing certificate...
	I0425 12:23:26.030518    5272 main.go:141] libmachine: Running pre-create checks...
	I0425 12:23:26.030528    5272 main.go:141] libmachine: (multinode-034000) Calling .PreCreateCheck
	I0425 12:23:26.030611    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:23:26.030758    5272 main.go:141] libmachine: (multinode-034000) Calling .GetConfigRaw
	I0425 12:23:26.040650    5272 main.go:141] libmachine: Creating machine...
	I0425 12:23:26.040674    5272 main.go:141] libmachine: (multinode-034000) Calling .Create
	I0425 12:23:26.040901    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:23:26.041184    5272 main.go:141] libmachine: (multinode-034000) DBG | I0425 12:23:26.040878    5280 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18757-1425/.minikube
	I0425 12:23:26.041296    5272 main.go:141] libmachine: (multinode-034000) Downloading /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18757-1425/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 12:23:26.213048    5272 main.go:141] libmachine: (multinode-034000) DBG | I0425 12:23:26.212948    5280 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa...
	I0425 12:23:26.361603    5272 main.go:141] libmachine: (multinode-034000) DBG | I0425 12:23:26.361472    5280 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/multinode-034000.rawdisk...
	I0425 12:23:26.361626    5272 main.go:141] libmachine: (multinode-034000) DBG | Writing magic tar header
	I0425 12:23:26.361638    5272 main.go:141] libmachine: (multinode-034000) DBG | Writing SSH key tar header
	I0425 12:23:26.362347    5272 main.go:141] libmachine: (multinode-034000) DBG | I0425 12:23:26.362287    5280 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000 ...
	I0425 12:23:26.717737    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:23:26.717759    5272 main.go:141] libmachine: (multinode-034000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/hyperkit.pid
	I0425 12:23:26.717789    5272 main.go:141] libmachine: (multinode-034000) DBG | Using UUID e458d994-a066-4236-8047-fdddf635d073
	I0425 12:23:26.828689    5272 main.go:141] libmachine: (multinode-034000) DBG | Generated MAC 1e:d3:c3:87:d3:c7
	I0425 12:23:26.828712    5272 main.go:141] libmachine: (multinode-034000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000
	I0425 12:23:26.828754    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:26 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e458d994-a066-4236-8047-fdddf635d073", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001ce240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0425 12:23:26.828785    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:26 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e458d994-a066-4236-8047-fdddf635d073", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001ce240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0425 12:23:26.828846    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:26 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e458d994-a066-4236-8047-fdddf635d073", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/multinode-034000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/bzimage,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"}
	I0425 12:23:26.828881    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:26 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e458d994-a066-4236-8047-fdddf635d073 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/multinode-034000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/console-ring -f kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/bzimage,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"
	I0425 12:23:26.828896    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:26 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0425 12:23:26.831693    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:26 DEBUG: hyperkit: Pid is 5283
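The Arguments/CmdLine lines above show the full hyperkit invocation. An annotated reconstruction in Go follows; the per-flag comments reflect common hyperkit/xhyve usage and are assumptions, so confirm against `hyperkit -h` rather than treating them as the driver's documentation:

    package main

    import "fmt"

    func main() {
        base := "/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000"
        args := []string{
            "-A",                         // generate ACPI tables for the guest
            "-u",                         // RTC keeps UTC time
            "-F", base + "/hyperkit.pid", // pid file used by the "hyperkit pid from json" checks
            "-c", "2",                    // vCPUs
            "-m", "2200M",                // guest memory
            "-s", "0:0,hostbridge",       // PCI slot 0: host bridge
            "-s", "31,lpc",               // slot 31: LPC bus (required for -l com1)
            "-s", "1:0,virtio-net",       // slot 1: NIC; its MAC is what the dhcpd_leases scan looks for
            "-U", "e458d994-a066-4236-8047-fdddf635d073", // stable VM UUID, hence a stable MAC
            "-s", "2:0,virtio-blk," + base + "/multinode-034000.rawdisk", // slot 2: root disk
            "-s", "3,ahci-cd," + base + "/boot2docker.iso",               // slot 3: boot ISO
            "-s", "4,virtio-rnd",         // slot 4: entropy device
            "-l", "com1,autopty=" + base + "/tty,log=" + base + "/console-ring", // serial console
            "-f", "kexec," + base + "/bzimage," + base + "/initrd,...",   // direct kernel boot (cmdline elided)
        }
        fmt.Println(args)
    }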
	I0425 12:23:26.832218    5272 main.go:141] libmachine: (multinode-034000) DBG | Attempt 0
	I0425 12:23:26.832228    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:23:26.832322    5272 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:23:26.833200    5272 main.go:141] libmachine: (multinode-034000) DBG | Searching for 1e:d3:c3:87:d3:c7 in /var/db/dhcpd_leases ...
	I0425 12:23:26.833269    5272 main.go:141] libmachine: (multinode-034000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0425 12:23:26.833286    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9a:5b:b6:68:c6:7f ID:1,9a:5b:b6:68:c6:7f Lease:0x662aadab}
	I0425 12:23:26.833303    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2a:2:66:32:5b:d7 ID:1,2a:2:66:32:5b:d7 Lease:0x662aad93}
	I0425 12:23:26.833323    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:46:90:90:e0:60:8c ID:1,46:90:90:e0:60:8c Lease:0x662aad66}
	I0425 12:23:26.833330    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e:cc:fd:cc:1e:71 ID:1,e:cc:fd:cc:1e:71 Lease:0x662bfe2a}
	I0425 12:23:26.833336    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:9a:35:70:19:5d ID:1,a:9a:35:70:19:5d Lease:0x662bfdeb}
	I0425 12:23:26.833342    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ca:c3:43:6b:33:f8 ID:1,ca:c3:43:6b:33:f8 Lease:0x662bfd9a}
	I0425 12:23:26.833354    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:0:2a:a2:aa:73 ID:1,3e:0:2a:a2:aa:73 Lease:0x662bfb6d}
	I0425 12:23:26.833368    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9e:4a:3:d7:af:68 ID:1,9e:4a:3:d7:af:68 Lease:0x662aa7d2}
	I0425 12:23:26.833386    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:2a:72:c0:46:70:5e ID:1,2a:72:c0:46:70:5e Lease:0x662bfb4d}
	I0425 12:23:26.833394    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7e:6e:5b:5f:88:ce ID:1,7e:6e:5b:5f:88:ce Lease:0x662bfb3b}
	I0425 12:23:26.833409    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1e:23:55:47:3b:5d ID:1,1e:23:55:47:3b:5d Lease:0x662bf4e1}
	I0425 12:23:26.833423    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:60:f8:40:a2:21 ID:1,ea:60:f8:40:a2:21 Lease:0x662bf41c}
	I0425 12:23:26.833431    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:39:86:1c:90:f7 ID:1,da:39:86:1c:90:f7 Lease:0x662bf2fa}
	I0425 12:23:26.833440    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x662bf2b4}
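Each numbered attempt above rereads /var/db/dhcpd_leases until the freshly generated MAC (1e:d3:c3:87:d3:c7) appears, which happens on attempt 5. A sketch of that scan; the brace-delimited key=value layout of the lease file is an assumption inferred from the fields the driver logs (Name, IPAddress, HWAddress, ID, Lease), so verify it against your own /var/db/dhcpd_leases:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func findIP(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        entry := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{":
                entry = map[string]string{} // start of a new lease block
            case line == "}":
                // hw_address is logged as "1,<mac>"; the leading "1," is the
                // hardware type, so compare the suffix only.
                if strings.HasSuffix(entry["hw_address"], ","+mac) {
                    return entry["ip_address"], nil
                }
            case strings.Contains(line, "="):
                kv := strings.SplitN(line, "=", 2)
                entry[kv[0]] = kv[1]
            }
        }
        return "", fmt.Errorf("no lease for %s", mac)
    }

    func main() {
        ip, err := findIP("/var/db/dhcpd_leases", "1e:d3:c3:87:d3:c7")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("IP:", ip) // the log above eventually resolves 192.169.0.16
    }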
	I0425 12:23:26.839410    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:26 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0425 12:23:26.891836    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:26 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0425 12:23:26.892440    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:23:26.892461    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:23:26.892478    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:23:26.892491    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:23:27.273023    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:27 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0425 12:23:27.273039    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:27 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0425 12:23:27.387689    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:23:27.387722    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:23:27.387742    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:23:27.387754    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:23:27.388597    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:27 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0425 12:23:27.388610    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:27 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0425 12:23:28.834136    5272 main.go:141] libmachine: (multinode-034000) DBG | Attempt 1
	I0425 12:23:28.834156    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:23:28.834279    5272 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:23:28.835065    5272 main.go:141] libmachine: (multinode-034000) DBG | Searching for 1e:d3:c3:87:d3:c7 in /var/db/dhcpd_leases ...
	I0425 12:23:28.835127    5272 main.go:141] libmachine: (multinode-034000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0425 12:23:28.835140    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9a:5b:b6:68:c6:7f ID:1,9a:5b:b6:68:c6:7f Lease:0x662aadab}
	I0425 12:23:28.835154    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2a:2:66:32:5b:d7 ID:1,2a:2:66:32:5b:d7 Lease:0x662aad93}
	I0425 12:23:28.835160    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:46:90:90:e0:60:8c ID:1,46:90:90:e0:60:8c Lease:0x662aad66}
	I0425 12:23:28.835167    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e:cc:fd:cc:1e:71 ID:1,e:cc:fd:cc:1e:71 Lease:0x662bfe2a}
	I0425 12:23:28.835174    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:9a:35:70:19:5d ID:1,a:9a:35:70:19:5d Lease:0x662bfdeb}
	I0425 12:23:28.835187    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ca:c3:43:6b:33:f8 ID:1,ca:c3:43:6b:33:f8 Lease:0x662bfd9a}
	I0425 12:23:28.835195    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:0:2a:a2:aa:73 ID:1,3e:0:2a:a2:aa:73 Lease:0x662bfb6d}
	I0425 12:23:28.835201    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9e:4a:3:d7:af:68 ID:1,9e:4a:3:d7:af:68 Lease:0x662aa7d2}
	I0425 12:23:28.835207    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:2a:72:c0:46:70:5e ID:1,2a:72:c0:46:70:5e Lease:0x662bfb4d}
	I0425 12:23:28.835218    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7e:6e:5b:5f:88:ce ID:1,7e:6e:5b:5f:88:ce Lease:0x662bfb3b}
	I0425 12:23:28.835231    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1e:23:55:47:3b:5d ID:1,1e:23:55:47:3b:5d Lease:0x662bf4e1}
	I0425 12:23:28.835245    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:60:f8:40:a2:21 ID:1,ea:60:f8:40:a2:21 Lease:0x662bf41c}
	I0425 12:23:28.835263    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:39:86:1c:90:f7 ID:1,da:39:86:1c:90:f7 Lease:0x662bf2fa}
	I0425 12:23:28.835271    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x662bf2b4}
	I0425 12:23:30.836633    5272 main.go:141] libmachine: (multinode-034000) DBG | Attempt 2
	I0425 12:23:30.836650    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:23:30.836688    5272 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:23:30.837542    5272 main.go:141] libmachine: (multinode-034000) DBG | Searching for 1e:d3:c3:87:d3:c7 in /var/db/dhcpd_leases ...
	I0425 12:23:30.837568    5272 main.go:141] libmachine: (multinode-034000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0425 12:23:30.837579    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9a:5b:b6:68:c6:7f ID:1,9a:5b:b6:68:c6:7f Lease:0x662aadab}
	I0425 12:23:30.837601    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2a:2:66:32:5b:d7 ID:1,2a:2:66:32:5b:d7 Lease:0x662aad93}
	I0425 12:23:30.837609    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:46:90:90:e0:60:8c ID:1,46:90:90:e0:60:8c Lease:0x662aad66}
	I0425 12:23:30.837616    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e:cc:fd:cc:1e:71 ID:1,e:cc:fd:cc:1e:71 Lease:0x662bfe2a}
	I0425 12:23:30.837624    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:9a:35:70:19:5d ID:1,a:9a:35:70:19:5d Lease:0x662bfdeb}
	I0425 12:23:30.837639    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ca:c3:43:6b:33:f8 ID:1,ca:c3:43:6b:33:f8 Lease:0x662bfd9a}
	I0425 12:23:30.837665    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:0:2a:a2:aa:73 ID:1,3e:0:2a:a2:aa:73 Lease:0x662bfb6d}
	I0425 12:23:30.837673    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9e:4a:3:d7:af:68 ID:1,9e:4a:3:d7:af:68 Lease:0x662aa7d2}
	I0425 12:23:30.837682    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:2a:72:c0:46:70:5e ID:1,2a:72:c0:46:70:5e Lease:0x662bfb4d}
	I0425 12:23:30.837691    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7e:6e:5b:5f:88:ce ID:1,7e:6e:5b:5f:88:ce Lease:0x662bfb3b}
	I0425 12:23:30.837698    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1e:23:55:47:3b:5d ID:1,1e:23:55:47:3b:5d Lease:0x662bf4e1}
	I0425 12:23:30.837705    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:60:f8:40:a2:21 ID:1,ea:60:f8:40:a2:21 Lease:0x662bf41c}
	I0425 12:23:30.837713    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:39:86:1c:90:f7 ID:1,da:39:86:1c:90:f7 Lease:0x662bf2fa}
	I0425 12:23:30.837719    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x662bf2b4}
	I0425 12:23:32.630687    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:32 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0425 12:23:32.630716    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:32 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0425 12:23:32.630725    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:32 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0425 12:23:32.654268    5272 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:23:32 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0425 12:23:32.839110    5272 main.go:141] libmachine: (multinode-034000) DBG | Attempt 3
	I0425 12:23:32.839133    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:23:32.839325    5272 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:23:32.840727    5272 main.go:141] libmachine: (multinode-034000) DBG | Searching for 1e:d3:c3:87:d3:c7 in /var/db/dhcpd_leases ...
	I0425 12:23:32.840801    5272 main.go:141] libmachine: (multinode-034000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0425 12:23:32.840814    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9a:5b:b6:68:c6:7f ID:1,9a:5b:b6:68:c6:7f Lease:0x662aadab}
	I0425 12:23:32.840825    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2a:2:66:32:5b:d7 ID:1,2a:2:66:32:5b:d7 Lease:0x662aad93}
	I0425 12:23:32.840835    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:46:90:90:e0:60:8c ID:1,46:90:90:e0:60:8c Lease:0x662aad66}
	I0425 12:23:32.840849    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e:cc:fd:cc:1e:71 ID:1,e:cc:fd:cc:1e:71 Lease:0x662bfe2a}
	I0425 12:23:32.840861    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:9a:35:70:19:5d ID:1,a:9a:35:70:19:5d Lease:0x662bfdeb}
	I0425 12:23:32.840870    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ca:c3:43:6b:33:f8 ID:1,ca:c3:43:6b:33:f8 Lease:0x662bfd9a}
	I0425 12:23:32.840895    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:0:2a:a2:aa:73 ID:1,3e:0:2a:a2:aa:73 Lease:0x662bfb6d}
	I0425 12:23:32.840906    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9e:4a:3:d7:af:68 ID:1,9e:4a:3:d7:af:68 Lease:0x662aa7d2}
	I0425 12:23:32.840915    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:2a:72:c0:46:70:5e ID:1,2a:72:c0:46:70:5e Lease:0x662bfb4d}
	I0425 12:23:32.840926    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7e:6e:5b:5f:88:ce ID:1,7e:6e:5b:5f:88:ce Lease:0x662bfb3b}
	I0425 12:23:32.840935    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1e:23:55:47:3b:5d ID:1,1e:23:55:47:3b:5d Lease:0x662bf4e1}
	I0425 12:23:32.840946    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:60:f8:40:a2:21 ID:1,ea:60:f8:40:a2:21 Lease:0x662bf41c}
	I0425 12:23:32.840955    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:39:86:1c:90:f7 ID:1,da:39:86:1c:90:f7 Lease:0x662bf2fa}
	I0425 12:23:32.840965    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x662bf2b4}
	I0425 12:23:34.842706    5272 main.go:141] libmachine: (multinode-034000) DBG | Attempt 4
	I0425 12:23:34.842726    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:23:34.842851    5272 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:23:34.843717    5272 main.go:141] libmachine: (multinode-034000) DBG | Searching for 1e:d3:c3:87:d3:c7 in /var/db/dhcpd_leases ...
	I0425 12:23:34.843759    5272 main.go:141] libmachine: (multinode-034000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0425 12:23:34.843817    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9a:5b:b6:68:c6:7f ID:1,9a:5b:b6:68:c6:7f Lease:0x662aadab}
	I0425 12:23:34.843829    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2a:2:66:32:5b:d7 ID:1,2a:2:66:32:5b:d7 Lease:0x662aad93}
	I0425 12:23:34.843840    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:46:90:90:e0:60:8c ID:1,46:90:90:e0:60:8c Lease:0x662aad66}
	I0425 12:23:34.843848    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e:cc:fd:cc:1e:71 ID:1,e:cc:fd:cc:1e:71 Lease:0x662bfe2a}
	I0425 12:23:34.843866    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:9a:35:70:19:5d ID:1,a:9a:35:70:19:5d Lease:0x662bfdeb}
	I0425 12:23:34.843882    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ca:c3:43:6b:33:f8 ID:1,ca:c3:43:6b:33:f8 Lease:0x662bfd9a}
	I0425 12:23:34.843891    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:0:2a:a2:aa:73 ID:1,3e:0:2a:a2:aa:73 Lease:0x662bfb6d}
	I0425 12:23:34.843900    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9e:4a:3:d7:af:68 ID:1,9e:4a:3:d7:af:68 Lease:0x662aa7d2}
	I0425 12:23:34.843907    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:2a:72:c0:46:70:5e ID:1,2a:72:c0:46:70:5e Lease:0x662bfb4d}
	I0425 12:23:34.843915    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7e:6e:5b:5f:88:ce ID:1,7e:6e:5b:5f:88:ce Lease:0x662bfb3b}
	I0425 12:23:34.843933    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1e:23:55:47:3b:5d ID:1,1e:23:55:47:3b:5d Lease:0x662bf4e1}
	I0425 12:23:34.843946    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:60:f8:40:a2:21 ID:1,ea:60:f8:40:a2:21 Lease:0x662bf41c}
	I0425 12:23:34.843999    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:39:86:1c:90:f7 ID:1,da:39:86:1c:90:f7 Lease:0x662bf2fa}
	I0425 12:23:34.844007    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x662bf2b4}
	I0425 12:23:36.845602    5272 main.go:141] libmachine: (multinode-034000) DBG | Attempt 5
	I0425 12:23:36.845618    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:23:36.845687    5272 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:23:36.846474    5272 main.go:141] libmachine: (multinode-034000) DBG | Searching for 1e:d3:c3:87:d3:c7 in /var/db/dhcpd_leases ...
	I0425 12:23:36.846526    5272 main.go:141] libmachine: (multinode-034000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0425 12:23:36.846539    5272 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:d3:c3:87:d3:c7 ID:1,1e:d3:c3:87:d3:c7 Lease:0x662bff37}
	I0425 12:23:36.846547    5272 main.go:141] libmachine: (multinode-034000) DBG | Found match: 1e:d3:c3:87:d3:c7
	I0425 12:23:36.846552    5272 main.go:141] libmachine: (multinode-034000) DBG | IP: 192.169.0.16
	I0425 12:23:36.846673    5272 main.go:141] libmachine: (multinode-034000) Calling .GetConfigRaw
	I0425 12:23:36.847291    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:23:36.847397    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:23:36.847493    5272 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 12:23:36.847500    5272 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:23:36.847588    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:23:36.847637    5272 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:23:36.848392    5272 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 12:23:36.848403    5272 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 12:23:36.848410    5272 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 12:23:36.848438    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:36.848627    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:23:36.848763    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:36.848887    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:36.849079    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:23:36.849267    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:23:36.849501    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:23:36.849509    5272 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 12:23:36.868322    5272 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0425 12:23:39.924896    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
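
WaitForSSH above just retries a no-op command ("exit 0") over SSH until authentication succeeds; the earlier handshake failure is expected while the guest is still provisioning keys. A sketch of the same retry pattern, shelling out to the ssh binary (minikube itself uses an in-process SSH client; the attempt count and sleep interval below are assumptions):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForSSH retries a trivial "exit 0" until the handshake and auth succeed.
    func waitForSSH(user, host, keyPath string, attempts int) error {
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("ssh",
    			"-i", keyPath,
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "ConnectTimeout=5",
    			fmt.Sprintf("%s@%s", user, host),
    			"exit 0")
    		if err := cmd.Run(); err == nil {
    			return nil // SSH is available
    		}
    		time.Sleep(2 * time.Second) // key may not be installed yet; retry
    	}
    	return fmt.Errorf("ssh to %s@%s not ready after %d attempts", user, host, attempts)
    }

    func main() {
    	err := waitForSSH("docker", "192.169.0.16",
    		"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa", 30)
    	if err != nil {
    		fmt.Println(err)
    	}
    }
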
	I0425 12:23:39.924908    5272 main.go:141] libmachine: Detecting the provisioner...
	I0425 12:23:39.924913    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:39.925044    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:23:39.925135    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:39.925235    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:39.925338    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:23:39.925468    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:23:39.925607    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:23:39.925619    5272 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 12:23:39.983447    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 12:23:39.983504    5272 main.go:141] libmachine: found compatible host: buildroot
	I0425 12:23:39.983510    5272 main.go:141] libmachine: Provisioning with buildroot...
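
Provisioner detection above keys off the ID field of /etc/os-release ("buildroot" here), which selects the buildroot provisioner. A small Go sketch of that parse, illustrative only:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // detectProvisioner extracts the distribution ID from os-release text,
    // which is what libmachine matches a provisioner against.
    func detectProvisioner(osRelease string) string {
    	sc := bufio.NewScanner(strings.NewReader(osRelease))
    	for sc.Scan() {
    		line := sc.Text()
    		if strings.HasPrefix(line, "ID=") {
    			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
    		}
    	}
    	return ""
    }

    func main() {
    	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
    	fmt.Println(detectProvisioner(out)) // buildroot
    }
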
	I0425 12:23:39.983518    5272 main.go:141] libmachine: (multinode-034000) Calling .GetMachineName
	I0425 12:23:39.983674    5272 buildroot.go:166] provisioning hostname "multinode-034000"
	I0425 12:23:39.983685    5272 main.go:141] libmachine: (multinode-034000) Calling .GetMachineName
	I0425 12:23:39.983777    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:39.983869    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:23:39.983959    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:39.984071    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:39.984159    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:23:39.984282    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:23:39.984417    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:23:39.984426    5272 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-034000 && echo "multinode-034000" | sudo tee /etc/hostname
	I0425 12:23:40.052758    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-034000
	
	I0425 12:23:40.052778    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:40.052907    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:23:40.053009    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:40.053100    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:40.053195    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:23:40.053316    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:23:40.053469    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:23:40.053480    5272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-034000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-034000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-034000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 12:23:40.115894    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 12:23:40.115922    5272 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18757-1425/.minikube CaCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18757-1425/.minikube}
	I0425 12:23:40.115945    5272 buildroot.go:174] setting up certificates
	I0425 12:23:40.115955    5272 provision.go:84] configureAuth start
	I0425 12:23:40.115963    5272 main.go:141] libmachine: (multinode-034000) Calling .GetMachineName
	I0425 12:23:40.116094    5272 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:23:40.116210    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:40.116293    5272 provision.go:143] copyHostCerts
	I0425 12:23:40.116326    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:23:40.116395    5272 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem, removing ...
	I0425 12:23:40.116403    5272 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:23:40.116573    5272 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem (1078 bytes)
	I0425 12:23:40.116787    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:23:40.116828    5272 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem, removing ...
	I0425 12:23:40.116833    5272 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:23:40.116918    5272 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem (1123 bytes)
	I0425 12:23:40.117051    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:23:40.117095    5272 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem, removing ...
	I0425 12:23:40.117105    5272 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:23:40.117191    5272 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem (1675 bytes)
	I0425 12:23:40.117325    5272 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem org=jenkins.multinode-034000 san=[127.0.0.1 192.169.0.16 localhost minikube multinode-034000]
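
The server-cert step above signs a certificate with the minikube CA and embeds the listed SANs (loopback, the VM IP, localhost, minikube, and the machine name). A self-contained Go sketch of that shape using crypto/x509; it generates a throwaway CA in-process for brevity, whereas the real flow loads ca.pem/ca-key.pem from the paths shown above, and error handling is elided:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-in for loading the existing minikube CA from disk.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-034000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.16")},
    		DNSNames:     []string{"localhost", "minikube", "multinode-034000"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem body
    }
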
	I0425 12:23:40.201322    5272 provision.go:177] copyRemoteCerts
	I0425 12:23:40.201378    5272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 12:23:40.201396    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:40.201534    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:23:40.201628    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:40.201708    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:23:40.201786    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:23:40.237938    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 12:23:40.238017    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0425 12:23:40.256903    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 12:23:40.256963    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 12:23:40.275805    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 12:23:40.275861    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0425 12:23:40.295597    5272 provision.go:87] duration metric: took 179.623509ms to configureAuth
	I0425 12:23:40.295611    5272 buildroot.go:189] setting minikube options for container-runtime
	I0425 12:23:40.295769    5272 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:23:40.295782    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:23:40.295917    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:40.296012    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:23:40.296095    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:40.296181    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:40.296268    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:23:40.296391    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:23:40.296513    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:23:40.296521    5272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0425 12:23:40.357185    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0425 12:23:40.357198    5272 buildroot.go:70] root file system type: tmpfs
	I0425 12:23:40.357272    5272 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0425 12:23:40.357287    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:40.357422    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:23:40.357522    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:40.357630    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:40.357724    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:23:40.357875    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:23:40.358014    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:23:40.358055    5272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0425 12:23:40.429250    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0425 12:23:40.429275    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:40.429402    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:23:40.429486    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:40.429593    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:40.429678    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:23:40.429805    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:23:40.429959    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:23:40.429970    5272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0425 12:23:41.934882    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0425 12:23:41.934897    5272 main.go:141] libmachine: Checking connection to Docker...
	I0425 12:23:41.934904    5272 main.go:141] libmachine: (multinode-034000) Calling .GetURL
	I0425 12:23:41.935041    5272 main.go:141] libmachine: Docker is up and running!
	I0425 12:23:41.935049    5272 main.go:141] libmachine: Reticulating splines...
	I0425 12:23:41.935054    5272 client.go:171] duration metric: took 15.904278289s to LocalClient.Create
	I0425 12:23:41.935094    5272 start.go:167] duration metric: took 15.904344407s to libmachine.API.Create "multinode-034000"
	I0425 12:23:41.935102    5272 start.go:293] postStartSetup for "multinode-034000" (driver="hyperkit")
	I0425 12:23:41.935108    5272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 12:23:41.935140    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:23:41.935291    5272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 12:23:41.935304    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:41.935391    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:23:41.935472    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:41.935567    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:23:41.935665    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:23:41.973765    5272 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 12:23:41.977587    5272 command_runner.go:130] > NAME=Buildroot
	I0425 12:23:41.977599    5272 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0425 12:23:41.977602    5272 command_runner.go:130] > ID=buildroot
	I0425 12:23:41.977606    5272 command_runner.go:130] > VERSION_ID=2023.02.9
	I0425 12:23:41.977610    5272 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0425 12:23:41.978047    5272 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 12:23:41.978061    5272 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/addons for local assets ...
	I0425 12:23:41.978170    5272 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/files for local assets ...
	I0425 12:23:41.978350    5272 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> 18852.pem in /etc/ssl/certs
	I0425 12:23:41.978362    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /etc/ssl/certs/18852.pem
	I0425 12:23:41.978581    5272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 12:23:41.988710    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /etc/ssl/certs/18852.pem (1708 bytes)
	I0425 12:23:42.017559    5272 start.go:296] duration metric: took 82.446768ms for postStartSetup
	I0425 12:23:42.017584    5272 main.go:141] libmachine: (multinode-034000) Calling .GetConfigRaw
	I0425 12:23:42.018173    5272 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:23:42.018330    5272 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:23:42.018671    5272 start.go:128] duration metric: took 16.020504147s to createHost
	I0425 12:23:42.018684    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:42.018773    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:23:42.018864    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:42.018929    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:42.018993    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:23:42.019090    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:23:42.019212    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:23:42.019218    5272 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 12:23:42.075052    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714073022.181874250
	
	I0425 12:23:42.075064    5272 fix.go:216] guest clock: 1714073022.181874250
	I0425 12:23:42.075073    5272 fix.go:229] Guest: 2024-04-25 12:23:42.18187425 -0700 PDT Remote: 2024-04-25 12:23:42.018679 -0700 PDT m=+16.456627621 (delta=163.19525ms)
	I0425 12:23:42.075086    5272 fix.go:200] guest clock delta is within tolerance: 163.19525ms
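
The clock check above parses the guest's `date +%s.%N` output, subtracts the host timestamp, and accepts the result if it is within tolerance. A sketch that reproduces the 163.19525ms delta from this run (the tolerance constant is an assumption, not minikube's actual value):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestDelta parses "seconds.nanoseconds" output and returns guest - host.
    func guestDelta(out string, host time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
    	}
    	return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
    	// Guest and host timestamps taken from the log lines above.
    	d, _ := guestDelta("1714073022.181874250", time.Unix(1714073022, 18679000))
    	const tolerance = 2 * time.Second // assumed threshold
    	if math.Abs(float64(d)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance\n", d) // 163.19525ms
    	}
    }
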
	I0425 12:23:42.075090    5272 start.go:83] releasing machines lock for "multinode-034000", held for 16.077017982s
	I0425 12:23:42.075108    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:23:42.075265    5272 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:23:42.075356    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:23:42.075644    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:23:42.075740    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:23:42.075807    5272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 12:23:42.075834    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:42.075860    5272 ssh_runner.go:195] Run: cat /version.json
	I0425 12:23:42.075871    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:23:42.075922    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:23:42.075955    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:23:42.076014    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:42.076038    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:23:42.076090    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:23:42.076123    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:23:42.076208    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:23:42.076222    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:23:42.107527    5272 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0425 12:23:42.107710    5272 ssh_runner.go:195] Run: systemctl --version
	I0425 12:23:42.155355    5272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0425 12:23:42.156151    5272 command_runner.go:130] > systemd 252 (252)
	I0425 12:23:42.156185    5272 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0425 12:23:42.156295    5272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0425 12:23:42.161414    5272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0425 12:23:42.161434    5272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 12:23:42.161469    5272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 12:23:42.173635    5272 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0425 12:23:42.173655    5272 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 12:23:42.173662    5272 start.go:494] detecting cgroup driver to use...
	I0425 12:23:42.173766    5272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:23:42.188375    5272 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0425 12:23:42.188649    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0425 12:23:42.197919    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0425 12:23:42.206433    5272 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0425 12:23:42.206473    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0425 12:23:42.214780    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:23:42.223140    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0425 12:23:42.231266    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:23:42.239477    5272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 12:23:42.252562    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0425 12:23:42.261502    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0425 12:23:42.270519    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0425 12:23:42.280034    5272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 12:23:42.288006    5272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0425 12:23:42.288171    5272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 12:23:42.296416    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:23:42.389867    5272 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0425 12:23:42.408338    5272 start.go:494] detecting cgroup driver to use...
	I0425 12:23:42.408416    5272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0425 12:23:42.422694    5272 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0425 12:23:42.422855    5272 command_runner.go:130] > [Unit]
	I0425 12:23:42.422864    5272 command_runner.go:130] > Description=Docker Application Container Engine
	I0425 12:23:42.422869    5272 command_runner.go:130] > Documentation=https://docs.docker.com
	I0425 12:23:42.422874    5272 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0425 12:23:42.422878    5272 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0425 12:23:42.422885    5272 command_runner.go:130] > StartLimitBurst=3
	I0425 12:23:42.422890    5272 command_runner.go:130] > StartLimitIntervalSec=60
	I0425 12:23:42.422893    5272 command_runner.go:130] > [Service]
	I0425 12:23:42.422896    5272 command_runner.go:130] > Type=notify
	I0425 12:23:42.422899    5272 command_runner.go:130] > Restart=on-failure
	I0425 12:23:42.422905    5272 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0425 12:23:42.422921    5272 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0425 12:23:42.422929    5272 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0425 12:23:42.422935    5272 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0425 12:23:42.422941    5272 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0425 12:23:42.422946    5272 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0425 12:23:42.422952    5272 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0425 12:23:42.422962    5272 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0425 12:23:42.422969    5272 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0425 12:23:42.422973    5272 command_runner.go:130] > ExecStart=
	I0425 12:23:42.422984    5272 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0425 12:23:42.422989    5272 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0425 12:23:42.422995    5272 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0425 12:23:42.423002    5272 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0425 12:23:42.423009    5272 command_runner.go:130] > LimitNOFILE=infinity
	I0425 12:23:42.423013    5272 command_runner.go:130] > LimitNPROC=infinity
	I0425 12:23:42.423017    5272 command_runner.go:130] > LimitCORE=infinity
	I0425 12:23:42.423027    5272 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0425 12:23:42.423034    5272 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0425 12:23:42.423039    5272 command_runner.go:130] > TasksMax=infinity
	I0425 12:23:42.423043    5272 command_runner.go:130] > TimeoutStartSec=0
	I0425 12:23:42.423048    5272 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0425 12:23:42.423052    5272 command_runner.go:130] > Delegate=yes
	I0425 12:23:42.423056    5272 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0425 12:23:42.423060    5272 command_runner.go:130] > KillMode=process
	I0425 12:23:42.423063    5272 command_runner.go:130] > [Install]
	I0425 12:23:42.423072    5272 command_runner.go:130] > WantedBy=multi-user.target
	I0425 12:23:42.423210    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:23:42.436218    5272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 12:23:42.450499    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:23:42.460848    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:23:42.470997    5272 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0425 12:23:42.505154    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:23:42.515620    5272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:23:42.530939    5272 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0425 12:23:42.531184    5272 ssh_runner.go:195] Run: which cri-dockerd
	I0425 12:23:42.533995    5272 command_runner.go:130] > /usr/bin/cri-dockerd
	I0425 12:23:42.534058    5272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0425 12:23:42.541068    5272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0425 12:23:42.556505    5272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0425 12:23:42.662930    5272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0425 12:23:42.760020    5272 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0425 12:23:42.760105    5272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0425 12:23:42.773995    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:23:42.879042    5272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0425 12:23:45.271108    5272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.391970571s)
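
The 130-byte file scp'd to /etc/docker/daemon.json just above is what carries the "cgroupfs" setting into Docker before this restart. Its exact contents are not echoed in the log; a plausible sketch of producing such a payload, where the field set is an assumption rather than minikube's verified output:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Assumed daemon.json shape: pin Docker to the cgroupfs driver
    	// chosen in the "configuring docker" step above.
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	out, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out)) // this payload would be copied to /etc/docker/daemon.json
    }
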
	I0425 12:23:45.271173    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0425 12:23:45.281562    5272 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0425 12:23:45.294279    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0425 12:23:45.304551    5272 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0425 12:23:45.397017    5272 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0425 12:23:45.496006    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:23:45.618879    5272 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0425 12:23:45.646792    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0425 12:23:45.658129    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:23:45.760235    5272 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0425 12:23:45.817590    5272 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0425 12:23:45.817671    5272 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0425 12:23:45.821894    5272 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0425 12:23:45.821910    5272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0425 12:23:45.821915    5272 command_runner.go:130] > Device: 0,22	Inode: 809         Links: 1
	I0425 12:23:45.821920    5272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0425 12:23:45.821924    5272 command_runner.go:130] > Access: 2024-04-25 19:23:45.878525750 +0000
	I0425 12:23:45.821934    5272 command_runner.go:130] > Modify: 2024-04-25 19:23:45.878525750 +0000
	I0425 12:23:45.821941    5272 command_runner.go:130] > Change: 2024-04-25 19:23:45.880525242 +0000
	I0425 12:23:45.821946    5272 command_runner.go:130] >  Birth: -
	I0425 12:23:45.822161    5272 start.go:562] Will wait 60s for crictl version
	I0425 12:23:45.822207    5272 ssh_runner.go:195] Run: which crictl
	I0425 12:23:45.825469    5272 command_runner.go:130] > /usr/bin/crictl
	I0425 12:23:45.825512    5272 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 12:23:45.853589    5272 command_runner.go:130] > Version:  0.1.0
	I0425 12:23:45.853617    5272 command_runner.go:130] > RuntimeName:  docker
	I0425 12:23:45.853720    5272 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0425 12:23:45.853781    5272 command_runner.go:130] > RuntimeApiVersion:  v1
	I0425 12:23:45.854935    5272 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0425 12:23:45.855011    5272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0425 12:23:45.870087    5272 command_runner.go:130] > 26.0.2
	I0425 12:23:45.870267    5272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0425 12:23:45.886243    5272 command_runner.go:130] > 26.0.2
	I0425 12:23:45.909653    5272 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0425 12:23:45.909679    5272 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:23:45.909970    5272 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0425 12:23:45.912946    5272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 12:23:45.922371    5272 kubeadm.go:877] updating cluster {Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 12:23:45.922438    5272 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:23:45.922498    5272 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0425 12:23:45.933210    5272 docker.go:685] Got preloaded images: 
	I0425 12:23:45.933222    5272 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0425 12:23:45.933293    5272 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0425 12:23:45.940625    5272 command_runner.go:139] > {"Repositories":{}}
	I0425 12:23:45.940719    5272 ssh_runner.go:195] Run: which lz4
	I0425 12:23:45.943423    5272 command_runner.go:130] > /usr/bin/lz4
	I0425 12:23:45.943513    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0425 12:23:45.943643    5272 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 12:23:45.946927    5272 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 12:23:45.946950    5272 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 12:23:45.946973    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0425 12:23:47.081972    5272 docker.go:649] duration metric: took 1.138339751s to copy over tarball
	I0425 12:23:47.082038    5272 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 12:23:49.889245    5272 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.807106377s)
	I0425 12:23:49.889260    5272 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 12:23:49.917988    5272 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0425 12:23:49.926348    5272 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0425 12:23:49.926435    5272 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0425 12:23:49.940769    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:23:50.044384    5272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0425 12:23:52.322681    5272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.278207047s)
	I0425 12:23:52.322777    5272 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0425 12:23:52.334307    5272 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0425 12:23:52.334321    5272 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 12:23:52.334325    5272 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0425 12:23:52.334329    5272 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0425 12:23:52.334333    5272 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0425 12:23:52.334337    5272 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0425 12:23:52.334340    5272 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0425 12:23:52.334345    5272 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 12:23:52.334760    5272 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0425 12:23:52.334778    5272 cache_images.go:84] Images are preloaded, skipping loading
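
The preload is judged successful above because, after the Docker restart, every image required for v1.30.0 shows up in `docker images --format {{.Repository}}:{{.Tag}}`. A sketch of that verification, with the expected list abbreviated:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // preloaded reports whether every expected image tag is already present.
    func preloaded(expected []string) (bool, error) {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	for _, img := range expected {
    		if !have[img] {
    			return false, nil // at least one image missing: must load/pull
    		}
    	}
    	return true, nil // "Images are preloaded, skipping loading"
    }

    func main() {
    	ok, err := preloaded([]string{
    		"registry.k8s.io/kube-apiserver:v1.30.0",
    		"registry.k8s.io/etcd:3.5.12-0",
    		"registry.k8s.io/pause:3.9",
    	})
    	fmt.Println(ok, err)
    }
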
	I0425 12:23:52.334787    5272 kubeadm.go:928] updating node { 192.169.0.16 8443 v1.30.0 docker true true} ...
	I0425 12:23:52.334867    5272 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-034000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 12:23:52.334950    5272 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0425 12:23:52.352023    5272 command_runner.go:130] > cgroupfs
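
The "cgroupfs" answer above comes from `docker info --format {{.CgroupDriver}}` and is what lands in the CgroupDriver kubeadm option and the cgroupDriver field of the KubeletConfiguration further below. A minimal sketch of that query:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // dockerCgroupDriver asks the running Docker daemon which cgroup
    // driver it uses, so kubelet can be configured to match.
    func dockerCgroupDriver() (string, error) {
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	driver, err := dockerCgroupDriver()
    	if err != nil {
    		driver = "cgroupfs" // assumed fallback, matching the value seen above
    	}
    	fmt.Println("cgroupDriver:", driver)
    }
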
	I0425 12:23:52.352625    5272 cni.go:84] Creating CNI manager for ""
	I0425 12:23:52.352636    5272 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0425 12:23:52.352646    5272 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 12:23:52.352659    5272 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.16 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-034000 NodeName:multinode-034000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 12:23:52.352763    5272 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-034000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 12:23:52.352828    5272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 12:23:52.360270    5272 command_runner.go:130] > kubeadm
	I0425 12:23:52.360279    5272 command_runner.go:130] > kubectl
	I0425 12:23:52.360283    5272 command_runner.go:130] > kubelet
	I0425 12:23:52.360297    5272 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 12:23:52.360346    5272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 12:23:52.367436    5272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0425 12:23:52.380716    5272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 12:23:52.394411    5272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0425 12:23:52.409308    5272 ssh_runner.go:195] Run: grep 192.169.0.16	control-plane.minikube.internal$ /etc/hosts
	I0425 12:23:52.412166    5272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 12:23:52.422119    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:23:52.520621    5272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 12:23:52.535259    5272 certs.go:68] Setting up /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000 for IP: 192.169.0.16
	I0425 12:23:52.535271    5272 certs.go:194] generating shared ca certs ...
	I0425 12:23:52.535291    5272 certs.go:226] acquiring lock for ca certs: {Name:mk1f3cabc8bfb1fa57eb09572b98c6852173235a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:23:52.535473    5272 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key
	I0425 12:23:52.535546    5272 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key
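
"skipping valid ... ca cert" above means the CA material already on disk parses cleanly and has not expired, so it is reused rather than regenerated. A minimal sketch of such a validity check; minikube's actual check lives in certs.go and may inspect more than expiry:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certStillValid reports whether a PEM certificate file exists,
    // parses, and has not yet passed its NotAfter date.
    func certStillValid(path string) bool {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false // missing: must generate
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false
    	}
    	return time.Now().Before(cert.NotAfter)
    }

    func main() {
    	if certStillValid("/Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt") {
    		fmt.Println("skipping valid ca cert")
    	} else {
    		fmt.Println("regenerating ca cert")
    	}
    }
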
	I0425 12:23:52.535557    5272 certs.go:256] generating profile certs ...
	I0425 12:23:52.535607    5272 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key
	I0425 12:23:52.535620    5272 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.crt with IP's: []
	I0425 12:23:52.789827    5272 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.crt ...
	I0425 12:23:52.789847    5272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.crt: {Name:mkc59355f4da0c1696313162dae5ee0a09521387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:23:52.790141    5272 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key ...
	I0425 12:23:52.790149    5272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key: {Name:mkdd2752dfef133e443bcb87f06f9d3c25afa1d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:23:52.790366    5272 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.key.b4ea36e9
	I0425 12:23:52.790381    5272 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.crt.b4ea36e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.16]
	I0425 12:23:52.867613    5272 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.crt.b4ea36e9 ...
	I0425 12:23:52.867627    5272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.crt.b4ea36e9: {Name:mkdbeb74d89023e320a0d6c79f4852d33c7eb85d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:23:52.867898    5272 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.key.b4ea36e9 ...
	I0425 12:23:52.867907    5272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.key.b4ea36e9: {Name:mk3a318724729b4796d82f00c784824661c43bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:23:52.868105    5272 certs.go:381] copying /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.crt.b4ea36e9 -> /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.crt
	I0425 12:23:52.868276    5272 certs.go:385] copying /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.key.b4ea36e9 -> /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.key
	I0425 12:23:52.868437    5272 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.key
	I0425 12:23:52.868455    5272 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.crt with IP's: []
	I0425 12:23:53.025177    5272 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.crt ...
	I0425 12:23:53.025197    5272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.crt: {Name:mkd3439536f4291e4a8569e821d867ce6ede891e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:23:53.025488    5272 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.key ...
	I0425 12:23:53.025503    5272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.key: {Name:mkad509af85fbb1678fdca9f37bcf628c08136b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
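
The crypto.go steps above mint three leaf certs signed by the shared minikube CA: a client cert for "minikube-user", an apiserver serving cert pinned to the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.16], and an aggregator (proxy-client) cert. A condensed sketch of the serving-cert case using only the Go standard library; an in-memory CA stands in for the on-disk ca.crt/ca.key, and subjects, key sizes, and lifetimes are illustrative (error checks elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative in-process CA; minikube loads ca.crt/ca.key from disk instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert with the same IP SANs the log shows for apiserver.crt.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.169.0.16"),
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
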
	I0425 12:23:53.025728    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 12:23:53.025757    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 12:23:53.025777    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 12:23:53.025797    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 12:23:53.025820    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 12:23:53.025840    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 12:23:53.025859    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 12:23:53.025877    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 12:23:53.025982    5272 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem (1338 bytes)
	W0425 12:23:53.026032    5272 certs.go:480] ignoring /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885_empty.pem, impossibly tiny 0 bytes
	I0425 12:23:53.026041    5272 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 12:23:53.026071    5272 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem (1078 bytes)
	I0425 12:23:53.026102    5272 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem (1123 bytes)
	I0425 12:23:53.026133    5272 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem (1675 bytes)
	I0425 12:23:53.026198    5272 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem (1708 bytes)
	I0425 12:23:53.026232    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:23:53.026254    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem -> /usr/share/ca-certificates/1885.pem
	I0425 12:23:53.026272    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /usr/share/ca-certificates/18852.pem
	I0425 12:23:53.026712    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 12:23:53.046978    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 12:23:53.067105    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 12:23:53.087175    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 12:23:53.106951    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 12:23:53.126779    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 12:23:53.146418    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 12:23:53.165318    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 12:23:53.185345    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 12:23:53.208118    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem --> /usr/share/ca-certificates/1885.pem (1338 bytes)
	I0425 12:23:53.237310    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /usr/share/ca-certificates/18852.pem (1708 bytes)
	I0425 12:23:53.262701    5272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 12:23:53.276128    5272 ssh_runner.go:195] Run: openssl version
	I0425 12:23:53.280102    5272 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0425 12:23:53.280315    5272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 12:23:53.288550    5272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:23:53.291964    5272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 25 18:31 /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:23:53.292001    5272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:31 /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:23:53.292040    5272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:23:53.296166    5272 command_runner.go:130] > b5213941
	I0425 12:23:53.296271    5272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 12:23:53.304959    5272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1885.pem && ln -fs /usr/share/ca-certificates/1885.pem /etc/ssl/certs/1885.pem"
	I0425 12:23:53.313595    5272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1885.pem
	I0425 12:23:53.316987    5272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 25 18:39 /usr/share/ca-certificates/1885.pem
	I0425 12:23:53.317097    5272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:39 /usr/share/ca-certificates/1885.pem
	I0425 12:23:53.317148    5272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1885.pem
	I0425 12:23:53.321266    5272 command_runner.go:130] > 51391683
	I0425 12:23:53.321454    5272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1885.pem /etc/ssl/certs/51391683.0"
	I0425 12:23:53.329745    5272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18852.pem && ln -fs /usr/share/ca-certificates/18852.pem /etc/ssl/certs/18852.pem"
	I0425 12:23:53.338097    5272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18852.pem
	I0425 12:23:53.341440    5272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 25 18:39 /usr/share/ca-certificates/18852.pem
	I0425 12:23:53.341584    5272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:39 /usr/share/ca-certificates/18852.pem
	I0425 12:23:53.341620    5272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18852.pem
	I0425 12:23:53.345577    5272 command_runner.go:130] > 3ec20f2e
	I0425 12:23:53.345770    5272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18852.pem /etc/ssl/certs/3ec20f2e.0"
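
Each cert copied under /usr/share/ca-certificates is then made discoverable to OpenSSL by symlinking it into /etc/ssl/certs under its subject hash — the "<hash>.0" convention that `openssl rehash`/`c_rehash` automates, and which TLS libraries use to locate a trust anchor during verification. A sketch of the same three-step loop, shelling out to openssl exactly as the log does (it links straight at the PEM file, a slight simplification of the two-hop links above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of certPath and
    // installs a /etc/ssl/certs/<hash>.0 symlink pointing at it.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        for _, c := range []string{
            "/usr/share/ca-certificates/minikubeCA.pem",
            "/usr/share/ca-certificates/1885.pem",
            "/usr/share/ca-certificates/18852.pem",
        } {
            if err := linkBySubjectHash(c); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }
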
	I0425 12:23:53.353995    5272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 12:23:53.357210    5272 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 12:23:53.357226    5272 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 12:23:53.357269    5272 kubeadm.go:391] StartCluster: {Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:23:53.357356    5272 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0425 12:23:53.368150    5272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0425 12:23:53.375403    5272 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0425 12:23:53.375417    5272 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0425 12:23:53.375422    5272 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0425 12:23:53.375550    5272 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 12:23:53.382880    5272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 12:23:53.391899    5272 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0425 12:23:53.391912    5272 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0425 12:23:53.391918    5272 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0425 12:23:53.391924    5272 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 12:23:53.391948    5272 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 12:23:53.391954    5272 kubeadm.go:156] found existing configuration files:
	
	I0425 12:23:53.391998    5272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 12:23:53.399963    5272 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 12:23:53.399982    5272 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 12:23:53.400022    5272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 12:23:53.408112    5272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 12:23:53.415989    5272 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 12:23:53.416006    5272 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 12:23:53.416043    5272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 12:23:53.424077    5272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 12:23:53.431785    5272 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 12:23:53.431806    5272 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 12:23:53.431841    5272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 12:23:53.439971    5272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 12:23:53.447797    5272 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 12:23:53.447817    5272 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 12:23:53.447851    5272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
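
This grep/rm sequence is the stale-config cleanup: each of the four kubeconfigs under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so `kubeadm init` can write a fresh one. Here all four are absent, which is the normal first-start case. A compact Go sketch of the same loop:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleConfigs keeps a kubeconfig only if it already references the
    // expected control-plane endpoint; otherwise it deletes the file so
    // kubeadm can regenerate it.
    func cleanStaleConfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(p) // ignore "not found", matching rm -f
                fmt.Printf("removed (or already absent): %s\n", p)
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
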
	I0425 12:23:53.455993    5272 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 12:23:53.520901    5272 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0425 12:23:53.520914    5272 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0425 12:23:53.520962    5272 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 12:23:53.520969    5272 command_runner.go:130] > [preflight] Running pre-flight checks
	I0425 12:23:53.610264    5272 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 12:23:53.610281    5272 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 12:23:53.610363    5272 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 12:23:53.610373    5272 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 12:23:53.610455    5272 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 12:23:53.610465    5272 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 12:23:53.762070    5272 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 12:23:53.762087    5272 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 12:23:53.795090    5272 out.go:204]   - Generating certificates and keys ...
	I0425 12:23:53.795166    5272 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0425 12:23:53.795176    5272 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 12:23:53.795243    5272 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0425 12:23:53.795253    5272 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 12:23:54.000054    5272 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0425 12:23:54.000062    5272 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0425 12:23:54.176711    5272 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0425 12:23:54.176719    5272 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0425 12:23:54.466740    5272 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0425 12:23:54.466781    5272 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0425 12:23:54.563556    5272 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0425 12:23:54.563564    5272 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0425 12:23:54.870857    5272 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0425 12:23:54.870873    5272 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0425 12:23:54.871074    5272 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-034000] and IPs [192.169.0.16 127.0.0.1 ::1]
	I0425 12:23:54.871084    5272 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-034000] and IPs [192.169.0.16 127.0.0.1 ::1]
	I0425 12:23:55.845824    5272 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0425 12:23:55.845828    5272 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0425 12:23:55.845941    5272 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-034000] and IPs [192.169.0.16 127.0.0.1 ::1]
	I0425 12:23:55.845946    5272 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-034000] and IPs [192.169.0.16 127.0.0.1 ::1]
	I0425 12:23:56.356574    5272 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0425 12:23:56.356580    5272 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0425 12:23:56.530775    5272 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0425 12:23:56.530781    5272 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0425 12:23:56.925195    5272 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0425 12:23:56.925213    5272 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0425 12:23:56.925394    5272 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 12:23:56.925405    5272 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 12:23:57.037677    5272 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 12:23:57.037695    5272 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 12:23:57.215043    5272 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 12:23:57.215060    5272 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 12:23:57.334818    5272 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 12:23:57.334838    5272 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 12:23:57.422493    5272 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 12:23:57.422508    5272 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 12:23:57.951263    5272 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 12:23:57.951278    5272 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 12:23:57.953065    5272 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 12:23:57.953077    5272 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 12:23:57.954772    5272 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 12:23:57.988404    5272 out.go:204]   - Booting up control plane ...
	I0425 12:23:57.954829    5272 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 12:23:57.988504    5272 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 12:23:57.988516    5272 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 12:23:57.988593    5272 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 12:23:57.988601    5272 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 12:23:57.988666    5272 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 12:23:57.988672    5272 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 12:23:57.988762    5272 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 12:23:57.988770    5272 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 12:23:57.988843    5272 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 12:23:57.988847    5272 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 12:23:57.988881    5272 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 12:23:57.988887    5272 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0425 12:23:58.076518    5272 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 12:23:58.076542    5272 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 12:23:58.076620    5272 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 12:23:58.076625    5272 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 12:23:58.578765    5272 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.250903ms
	I0425 12:23:58.578806    5272 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.250903ms
	I0425 12:23:58.578962    5272 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 12:23:58.578975    5272 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 12:24:02.577381    5272 kubeadm.go:309] [api-check] The API server is healthy after 4.00109021s
	I0425 12:24:02.577389    5272 command_runner.go:130] > [api-check] The API server is healthy after 4.00109021s
	I0425 12:24:02.588393    5272 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 12:24:02.588405    5272 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 12:24:02.595002    5272 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 12:24:02.595003    5272 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 12:24:02.607824    5272 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0425 12:24:02.607842    5272 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0425 12:24:02.608001    5272 kubeadm.go:309] [mark-control-plane] Marking the node multinode-034000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 12:24:02.608010    5272 command_runner.go:130] > [mark-control-plane] Marking the node multinode-034000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 12:24:02.613505    5272 kubeadm.go:309] [bootstrap-token] Using token: kja6uw.bf821eudqko24xwz
	I0425 12:24:02.613521    5272 command_runner.go:130] > [bootstrap-token] Using token: kja6uw.bf821eudqko24xwz
	I0425 12:24:02.654501    5272 out.go:204]   - Configuring RBAC rules ...
	I0425 12:24:02.654625    5272 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 12:24:02.654638    5272 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 12:24:02.656550    5272 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 12:24:02.656559    5272 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 12:24:02.685590    5272 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 12:24:02.685602    5272 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 12:24:02.687649    5272 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 12:24:02.687654    5272 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 12:24:02.689733    5272 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 12:24:02.689740    5272 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 12:24:02.692253    5272 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 12:24:02.692264    5272 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 12:24:02.981606    5272 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 12:24:02.981614    5272 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 12:24:03.395166    5272 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0425 12:24:03.395181    5272 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0425 12:24:03.984249    5272 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0425 12:24:03.984267    5272 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0425 12:24:03.984832    5272 kubeadm.go:309] 
	I0425 12:24:03.984879    5272 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0425 12:24:03.984884    5272 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0425 12:24:03.984891    5272 kubeadm.go:309] 
	I0425 12:24:03.984944    5272 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0425 12:24:03.984951    5272 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0425 12:24:03.984963    5272 kubeadm.go:309] 
	I0425 12:24:03.985009    5272 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0425 12:24:03.985015    5272 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0425 12:24:03.985062    5272 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 12:24:03.985077    5272 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 12:24:03.985123    5272 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 12:24:03.985127    5272 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 12:24:03.985134    5272 kubeadm.go:309] 
	I0425 12:24:03.985176    5272 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0425 12:24:03.985184    5272 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0425 12:24:03.985201    5272 kubeadm.go:309] 
	I0425 12:24:03.985234    5272 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 12:24:03.985239    5272 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 12:24:03.985248    5272 kubeadm.go:309] 
	I0425 12:24:03.985288    5272 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0425 12:24:03.985293    5272 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0425 12:24:03.985369    5272 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 12:24:03.985380    5272 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 12:24:03.985455    5272 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 12:24:03.985461    5272 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 12:24:03.985472    5272 kubeadm.go:309] 
	I0425 12:24:03.985535    5272 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0425 12:24:03.985539    5272 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0425 12:24:03.985603    5272 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0425 12:24:03.985608    5272 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0425 12:24:03.985611    5272 kubeadm.go:309] 
	I0425 12:24:03.985693    5272 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kja6uw.bf821eudqko24xwz \
	I0425 12:24:03.985702    5272 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token kja6uw.bf821eudqko24xwz \
	I0425 12:24:03.985781    5272 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:00651354ee141ab473da454fccfa896339ebbff71705c055a7dbbfb8ae906871 \
	I0425 12:24:03.985794    5272 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:00651354ee141ab473da454fccfa896339ebbff71705c055a7dbbfb8ae906871 \
	I0425 12:24:03.985812    5272 kubeadm.go:309] 	--control-plane 
	I0425 12:24:03.985817    5272 command_runner.go:130] > 	--control-plane 
	I0425 12:24:03.985827    5272 kubeadm.go:309] 
	I0425 12:24:03.985901    5272 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0425 12:24:03.985910    5272 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0425 12:24:03.985916    5272 kubeadm.go:309] 
	I0425 12:24:03.985981    5272 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kja6uw.bf821eudqko24xwz \
	I0425 12:24:03.985987    5272 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token kja6uw.bf821eudqko24xwz \
	I0425 12:24:03.986072    5272 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:00651354ee141ab473da454fccfa896339ebbff71705c055a7dbbfb8ae906871 
	I0425 12:24:03.986078    5272 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:00651354ee141ab473da454fccfa896339ebbff71705c055a7dbbfb8ae906871 
	I0425 12:24:03.986612    5272 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 12:24:03.986617    5272 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 12:24:03.986626    5272 cni.go:84] Creating CNI manager for ""
	I0425 12:24:03.986630    5272 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0425 12:24:04.046458    5272 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0425 12:24:04.067517    5272 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0425 12:24:04.072436    5272 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0425 12:24:04.072450    5272 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0425 12:24:04.072456    5272 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0425 12:24:04.072461    5272 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0425 12:24:04.072465    5272 command_runner.go:130] > Access: 2024-04-25 19:23:36.580310185 +0000
	I0425 12:24:04.072469    5272 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0425 12:24:04.072477    5272 command_runner.go:130] > Change: 2024-04-25 19:23:34.592943030 +0000
	I0425 12:24:04.072488    5272 command_runner.go:130] >  Birth: -
	I0425 12:24:04.072564    5272 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0425 12:24:04.072573    5272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0425 12:24:04.087541    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0425 12:24:04.260565    5272 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0425 12:24:04.264038    5272 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0425 12:24:04.268269    5272 command_runner.go:130] > serviceaccount/kindnet created
	I0425 12:24:04.273407    5272 command_runner.go:130] > daemonset.apps/kindnet created
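
With one node found and multinode requested, minikube selects kindnet, copies the rendered manifest to /var/tmp/minikube/cni.yaml, and applies it with the cluster's own kubectl binary, creating the four kindnet objects listed above. A minimal sketch of that apply step (paths taken from the log; assumes the sketch runs inside the guest with sudo available):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyManifest applies a manifest with a pinned kubectl binary and an
    // explicit kubeconfig, as the ssh_runner step above does.
    func applyManifest(kubectlPath, kubeconfig, manifest string) error {
        cmd := exec.Command("sudo", kubectlPath, "apply",
            "--kubeconfig="+kubeconfig, "-f", manifest)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := applyManifest("/var/lib/minikube/binaries/v1.30.0/kubectl",
            "/var/lib/minikube/kubeconfig", "/var/tmp/minikube/cni.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
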
	I0425 12:24:04.274870    5272 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 12:24:04.274934    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:04.274941    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-034000 minikube.k8s.io/updated_at=2024_04_25T12_24_04_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=multinode-034000 minikube.k8s.io/primary=true
	I0425 12:24:04.401574    5272 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0425 12:24:04.403334    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:04.428008    5272 command_runner.go:130] > node/multinode-034000 labeled
	I0425 12:24:04.428913    5272 command_runner.go:130] > -16
	I0425 12:24:04.428926    5272 ops.go:34] apiserver oom_adj: -16
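
The -16 read back here is the kube-apiserver's OOM adjustment, confirming the kernel will deprioritize killing the apiserver under memory pressure. A sketch of the same check (assumes pgrep finds exactly one kube-apiserver process; the log reads the legacy /proc/<pid>/oom_adj file, so this does too):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj finds the kube-apiserver pid with pgrep and reads its
    // oom_adj value from procfs.
    func apiserverOOMAdj() (string, error) {
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return "", err
        }
        val, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(val)), nil
    }

    func main() {
        v, err := apiserverOOMAdj()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("apiserver oom_adj:", v)
    }
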
	I0425 12:24:04.470619    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:04.903515    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:04.963250    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:05.403556    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:05.467791    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:05.903627    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:05.963990    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:06.403437    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:06.464557    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:06.904760    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:06.966145    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:07.403592    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:07.460464    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:07.903741    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:07.962775    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:08.404275    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:08.458808    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:08.905160    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:08.963371    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:09.403931    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:09.460487    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:09.904439    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:09.964188    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:10.404076    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:10.472855    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:10.904651    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:10.968869    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:11.403714    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:11.461158    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:11.904130    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:11.963740    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:12.404012    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:12.469627    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:12.904148    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:12.969785    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:13.404188    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:13.469415    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:13.904675    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:13.969758    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:14.404856    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:14.470796    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:14.904404    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:14.965232    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:15.404070    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:15.476300    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:15.903778    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:15.966160    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:16.403791    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:16.470031    5272 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0425 12:24:16.903912    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 12:24:16.965284    5272 command_runner.go:130] > NAME      SECRETS   AGE
	I0425 12:24:16.965297    5272 command_runner.go:130] > default   0         1s
	I0425 12:24:16.965310    5272 kubeadm.go:1107] duration metric: took 12.690046832s to wait for elevateKubeSystemPrivileges
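
The long run of 'serviceaccounts "default" not found' errors above is expected on a first start: after `kubeadm init`, the token controller creates the "default" ServiceAccount asynchronously, so minikube polls `kubectl get sa default` roughly every 500ms until it appears (about 12.7s here). A sketch of the same wait loop (assumes a kubectl binary on PATH; the log invokes the bundled one over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls "kubectl get sa default" until the token
    // controller has created the ServiceAccount, or the timeout elapses.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                return nil // service account exists; cluster is usable
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
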
	W0425 12:24:16.965331    5272 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0425 12:24:16.965336    5272 kubeadm.go:393] duration metric: took 23.607363756s to StartCluster
	I0425 12:24:16.965347    5272 settings.go:142] acquiring lock: {Name:mk8a221f9e3ce6c550df0488a0a92b106f308663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:24:16.965438    5272 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:24:16.965899    5272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-1425/kubeconfig: {Name:mk225259838427b91a16bb598157785cd2bcef65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:24:16.966162    5272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0425 12:24:16.966194    5272 start.go:234] Will wait 6m0s for node &{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0425 12:24:17.026715    5272 out.go:177] * Verifying Kubernetes components...
	I0425 12:24:16.966202    5272 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 12:24:16.966301    5272 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:24:17.063920    5272 addons.go:69] Setting storage-provisioner=true in profile "multinode-034000"
	I0425 12:24:17.063966    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:24:17.063977    5272 addons.go:69] Setting default-storageclass=true in profile "multinode-034000"
	I0425 12:24:17.063969    5272 addons.go:234] Setting addon storage-provisioner=true in "multinode-034000"
	I0425 12:24:17.064035    5272 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-034000"
	I0425 12:24:17.064056    5272 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:24:17.064530    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:24:17.064551    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:24:17.064571    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:24:17.064588    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:24:17.071179    5272 command_runner.go:130] > apiVersion: v1
	I0425 12:24:17.071206    5272 command_runner.go:130] > data:
	I0425 12:24:17.071210    5272 command_runner.go:130] >   Corefile: |
	I0425 12:24:17.071214    5272 command_runner.go:130] >     .:53 {
	I0425 12:24:17.071217    5272 command_runner.go:130] >         errors
	I0425 12:24:17.071222    5272 command_runner.go:130] >         health {
	I0425 12:24:17.071226    5272 command_runner.go:130] >            lameduck 5s
	I0425 12:24:17.071229    5272 command_runner.go:130] >         }
	I0425 12:24:17.071232    5272 command_runner.go:130] >         ready
	I0425 12:24:17.071252    5272 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0425 12:24:17.071256    5272 command_runner.go:130] >            pods insecure
	I0425 12:24:17.071260    5272 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0425 12:24:17.071268    5272 command_runner.go:130] >            ttl 30
	I0425 12:24:17.071272    5272 command_runner.go:130] >         }
	I0425 12:24:17.071279    5272 command_runner.go:130] >         prometheus :9153
	I0425 12:24:17.071283    5272 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0425 12:24:17.071289    5272 command_runner.go:130] >            max_concurrent 1000
	I0425 12:24:17.071293    5272 command_runner.go:130] >         }
	I0425 12:24:17.071296    5272 command_runner.go:130] >         cache 30
	I0425 12:24:17.071300    5272 command_runner.go:130] >         loop
	I0425 12:24:17.071303    5272 command_runner.go:130] >         reload
	I0425 12:24:17.071309    5272 command_runner.go:130] >         loadbalance
	I0425 12:24:17.071313    5272 command_runner.go:130] >     }
	I0425 12:24:17.071316    5272 command_runner.go:130] > kind: ConfigMap
	I0425 12:24:17.071319    5272 command_runner.go:130] > metadata:
	I0425 12:24:17.071350    5272 command_runner.go:130] >   creationTimestamp: "2024-04-25T19:24:03Z"
	I0425 12:24:17.071355    5272 command_runner.go:130] >   name: coredns
	I0425 12:24:17.071358    5272 command_runner.go:130] >   namespace: kube-system
	I0425 12:24:17.071365    5272 command_runner.go:130] >   resourceVersion: "230"
	I0425 12:24:17.071372    5272 command_runner.go:130] >   uid: e3564ac9-0127-4969-9996-df4ae1f92cb4
	I0425 12:24:17.072143    5272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
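
The sed pipeline above patches the Corefile fetched at 12:24:16 (dumped in the interleaved command_runner lines just above): it inserts a `log` directive before the `errors` plugin and a `hosts` block resolving host.minikube.internal to the host gateway (192.169.0.1) before the `forward . /etc/resolv.conf` stanza, then replaces the ConfigMap. A Go sketch of the same text transformation, operating on the Corefile as a plain string:

    package main

    import (
        "fmt"
        "strings"
    )

    // patchCorefile mirrors the two sed expressions: add "log" before
    // "errors", and a hosts block before the forward plugin so pods can
    // resolve host.minikube.internal to the host gateway.
    func patchCorefile(corefile, hostIP string) string {
        hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
        corefile = strings.Replace(corefile, "        errors\n", "        log\n        errors\n", 1)
        return strings.Replace(corefile, "        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
    }

    func main() {
        in := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n        }\n}\n"
        fmt.Println(patchCorefile(in, "192.169.0.1"))
    }
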
	I0425 12:24:17.074218    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52359
	I0425 12:24:17.074229    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52358
	I0425 12:24:17.074593    5272 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:24:17.074596    5272 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:24:17.074924    5272 main.go:141] libmachine: Using API Version  1
	I0425 12:24:17.074935    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:24:17.074933    5272 main.go:141] libmachine: Using API Version  1
	I0425 12:24:17.074973    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:24:17.075156    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:24:17.075178    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:24:17.075343    5272 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:24:17.075457    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:17.075517    5272 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:24:17.075561    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:24:17.075588    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:24:17.077886    5272 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:24:17.078127    5272 kapi.go:59] client config for multinode-034000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key", CAFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf373ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0425 12:24:17.078530    5272 cert_rotation.go:137] Starting client certificate rotation controller
	I0425 12:24:17.078680    5272 addons.go:234] Setting addon default-storageclass=true in "multinode-034000"
	I0425 12:24:17.078698    5272 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:24:17.078926    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:24:17.078954    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:24:17.084845    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52362
	I0425 12:24:17.085204    5272 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:24:17.085531    5272 main.go:141] libmachine: Using API Version  1
	I0425 12:24:17.085540    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:24:17.085774    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:24:17.085900    5272 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:24:17.085989    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:17.086066    5272 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:24:17.087034    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:24:17.107785    5272 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 12:24:17.087858    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52364
	I0425 12:24:17.108227    5272 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:24:17.128818    5272 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 12:24:17.128832    5272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 12:24:17.128870    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:24:17.129052    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:24:17.129154    5272 main.go:141] libmachine: Using API Version  1
	I0425 12:24:17.129165    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:24:17.129165    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:24:17.129286    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:24:17.129381    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:24:17.129517    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:24:17.129961    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:24:17.129987    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:24:17.138943    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52367
	I0425 12:24:17.139312    5272 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:24:17.139664    5272 main.go:141] libmachine: Using API Version  1
	I0425 12:24:17.139682    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:24:17.139891    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:24:17.140053    5272 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:24:17.140138    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:17.140233    5272 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:24:17.141202    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:24:17.141386    5272 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 12:24:17.141394    5272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 12:24:17.141403    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:24:17.141518    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:24:17.141636    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:24:17.141735    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:24:17.141843    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:24:17.292021    5272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 12:24:17.356126    5272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 12:24:17.376177    5272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 12:24:17.438101    5272 command_runner.go:130] > configmap/coredns replaced
	I0425 12:24:17.438144    5272 start.go:946] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0425 12:24:17.438499    5272 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:24:17.438499    5272 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:24:17.438691    5272 kapi.go:59] client config for multinode-034000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key", CAFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf373ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0425 12:24:17.438687    5272 kapi.go:59] client config for multinode-034000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key", CAFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf373ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0425 12:24:17.438973    5272 node_ready.go:35] waiting up to 6m0s for node "multinode-034000" to be "Ready" ...
	I0425 12:24:17.439022    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:17.439022    5272 round_trippers.go:463] GET https://192.169.0.16:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0425 12:24:17.439027    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:17.439029    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:17.439035    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:17.439036    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:17.439040    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:17.439043    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:17.445157    5272 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 12:24:17.445168    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:17.445175    5272 round_trippers.go:580]     Audit-Id: ee49ac94-af0f-4650-944c-cfc0bcc8ff1b
	I0425 12:24:17.445181    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:17.445193    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:17.445198    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:17.445205    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:17.445207    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:17 GMT
	I0425 12:24:17.445293    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:17.445639    5272 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 12:24:17.445648    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:17.445653    5272 round_trippers.go:580]     Audit-Id: ed223cdd-cc44-43bc-abed-4f807d61a354
	I0425 12:24:17.445656    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:17.445658    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:17.445662    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:17.445672    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:17.445677    5272 round_trippers.go:580]     Content-Length: 291
	I0425 12:24:17.445684    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:17 GMT
	I0425 12:24:17.445726    5272 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"788c1820-76bc-4dc1-b668-c1baaf5ea423","resourceVersion":"353","creationTimestamp":"2024-04-25T19:24:03Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0425 12:24:17.445948    5272 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"788c1820-76bc-4dc1-b668-c1baaf5ea423","resourceVersion":"353","creationTimestamp":"2024-04-25T19:24:03Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0425 12:24:17.445981    5272 round_trippers.go:463] PUT https://192.169.0.16:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0425 12:24:17.445987    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:17.445993    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:17.445998    5272 round_trippers.go:473]     Content-Type: application/json
	I0425 12:24:17.446002    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:17.449120    5272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:24:17.449130    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:17.449135    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:17.449139    5272 round_trippers.go:580]     Content-Length: 291
	I0425 12:24:17.449155    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:17 GMT
	I0425 12:24:17.449159    5272 round_trippers.go:580]     Audit-Id: 723b0e13-2397-49fc-a484-4a6d2b3f13c7
	I0425 12:24:17.449184    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:17.449187    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:17.449190    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:17.449204    5272 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"788c1820-76bc-4dc1-b668-c1baaf5ea423","resourceVersion":"355","creationTimestamp":"2024-04-25T19:24:03Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0425 12:24:17.747001    5272 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0425 12:24:17.747023    5272 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0425 12:24:17.747033    5272 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0425 12:24:17.747045    5272 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0425 12:24:17.747051    5272 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0425 12:24:17.747055    5272 command_runner.go:130] > pod/storage-provisioner created
	I0425 12:24:17.747095    5272 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0425 12:24:17.747103    5272 main.go:141] libmachine: Making call to close driver server
	I0425 12:24:17.747113    5272 main.go:141] libmachine: (multinode-034000) Calling .Close
	I0425 12:24:17.747120    5272 main.go:141] libmachine: Making call to close driver server
	I0425 12:24:17.747131    5272 main.go:141] libmachine: (multinode-034000) Calling .Close
	I0425 12:24:17.747281    5272 main.go:141] libmachine: Successfully made call to close driver server
	I0425 12:24:17.747305    5272 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 12:24:17.747326    5272 main.go:141] libmachine: (multinode-034000) DBG | Closing plugin on server side
	I0425 12:24:17.747336    5272 main.go:141] libmachine: Making call to close driver server
	I0425 12:24:17.747362    5272 main.go:141] libmachine: (multinode-034000) Calling .Close
	I0425 12:24:17.747369    5272 main.go:141] libmachine: (multinode-034000) DBG | Closing plugin on server side
	I0425 12:24:17.747340    5272 main.go:141] libmachine: Successfully made call to close driver server
	I0425 12:24:17.747387    5272 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 12:24:17.747396    5272 main.go:141] libmachine: Making call to close driver server
	I0425 12:24:17.747401    5272 main.go:141] libmachine: (multinode-034000) Calling .Close
	I0425 12:24:17.747532    5272 main.go:141] libmachine: (multinode-034000) DBG | Closing plugin on server side
	I0425 12:24:17.747552    5272 main.go:141] libmachine: Successfully made call to close driver server
	I0425 12:24:17.747561    5272 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 12:24:17.747563    5272 main.go:141] libmachine: Successfully made call to close driver server
	I0425 12:24:17.747574    5272 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 12:24:17.747604    5272 main.go:141] libmachine: (multinode-034000) DBG | Closing plugin on server side
	I0425 12:24:17.747652    5272 round_trippers.go:463] GET https://192.169.0.16:8443/apis/storage.k8s.io/v1/storageclasses
	I0425 12:24:17.747659    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:17.747668    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:17.747674    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:17.752110    5272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 12:24:17.752121    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:17.752127    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:17.752130    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:17.752134    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:17.752138    5272 round_trippers.go:580]     Content-Length: 1273
	I0425 12:24:17.752140    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:17 GMT
	I0425 12:24:17.752142    5272 round_trippers.go:580]     Audit-Id: 0a26c34c-4c92-48fe-8d28-77cf3c7d94bc
	I0425 12:24:17.752145    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:17.752164    5272 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"375"},"items":[{"metadata":{"name":"standard","uid":"b42ec148-0144-424c-ab8f-c2a1a362f787","resourceVersion":"366","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0425 12:24:17.752423    5272 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b42ec148-0144-424c-ab8f-c2a1a362f787","resourceVersion":"366","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0425 12:24:17.752450    5272 round_trippers.go:463] PUT https://192.169.0.16:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0425 12:24:17.752455    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:17.752461    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:17.752464    5272 round_trippers.go:473]     Content-Type: application/json
	I0425 12:24:17.752467    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:17.754488    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:17.754495    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:17.754500    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:17.754503    5272 round_trippers.go:580]     Content-Length: 1220
	I0425 12:24:17.754513    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:17 GMT
	I0425 12:24:17.754517    5272 round_trippers.go:580]     Audit-Id: 993c748a-7959-499b-9e64-94301e4ad84f
	I0425 12:24:17.754519    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:17.754522    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:17.754524    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:17.754598    5272 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b42ec148-0144-424c-ab8f-c2a1a362f787","resourceVersion":"366","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
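The PUT above re-submits the "standard" StorageClass essentially unchanged (note resourceVersion stays at 366), which is how the addon code confirms the storageclass.kubernetes.io/is-default-class annotation is in place after kubectl apply created the object. In client-go terms the call is roughly the following sketch (updateStorageClass is an illustrative name, not minikube's API):

	package main

	import (
		"context"

		storagev1 "k8s.io/api/storage/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// updateStorageClass writes the object back; the API server only bumps
	// resourceVersion if something actually changed, as seen in the log.
	func updateStorageClass(ctx context.Context, cs kubernetes.Interface, sc *storagev1.StorageClass) error {
		_, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	}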
	I0425 12:24:17.754673    5272 main.go:141] libmachine: Making call to close driver server
	I0425 12:24:17.754682    5272 main.go:141] libmachine: (multinode-034000) Calling .Close
	I0425 12:24:17.754845    5272 main.go:141] libmachine: Successfully made call to close driver server
	I0425 12:24:17.754848    5272 main.go:141] libmachine: (multinode-034000) DBG | Closing plugin on server side
	I0425 12:24:17.754852    5272 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 12:24:17.799432    5272 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0425 12:24:17.820660    5272 addons.go:505] duration metric: took 854.438358ms for enable addons: enabled=[storage-provisioner default-storageclass]
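Each addon install above is two remote steps: ssh_runner.go first copies the manifest into /etc/kubernetes/addons/ (the "scp memory -->" lines), then runs kubectl apply against it with the in-VM kubeconfig. A minimal sketch of that second step over SSH, assuming an already-dialed *ssh.Client from golang.org/x/crypto/ssh (applyAddon and its parameters are illustrative names, not minikube's actual API):

	package main

	import (
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// applyAddon runs the same kubectl apply the log shows for
	// storage-provisioner.yaml and storageclass.yaml, in one SSH session.
	func applyAddon(client *ssh.Client, manifest string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		cmd := fmt.Sprintf("sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f %s", manifest)
		return sess.Run(cmd) // assumes passwordless sudo inside the minikube guest
	}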
	I0425 12:24:17.940519    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:17.940544    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:17.940555    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:17.940562    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:17.940648    5272 round_trippers.go:463] GET https://192.169.0.16:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0425 12:24:17.940669    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:17.940680    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:17.940688    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:17.943028    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:17.943044    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:17.943054    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:18 GMT
	I0425 12:24:17.943066    5272 round_trippers.go:580]     Audit-Id: 0bdd207f-ff27-4101-a5e8-2e51c4e13ab5
	I0425 12:24:17.943071    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:17.943077    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:17.943095    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:17.943102    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:17.943211    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:17.943316    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:17.943327    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:17.943334    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:18 GMT
	I0425 12:24:17.943338    5272 round_trippers.go:580]     Audit-Id: 4e7dc0fd-a3d6-4b48-a23c-f02a6c805df1
	I0425 12:24:17.943342    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:17.943346    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:17.943352    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:17.943355    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:17.943358    5272 round_trippers.go:580]     Content-Length: 291
	I0425 12:24:17.943431    5272 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"788c1820-76bc-4dc1-b668-c1baaf5ea423","resourceVersion":"365","creationTimestamp":"2024-04-25T19:24:03Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0425 12:24:17.943494    5272 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-034000" context rescaled to 1 replicas
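The rescale just logged is the standard Scale-subresource read-modify-write: GET .../deployments/coredns/scale, drop spec.replicas from 2 to 1 (status.replicas stays at 2 until the deployment controller reconciles, as the 291-byte bodies show), then PUT it back. A minimal client-go sketch of the same sequence, assuming a configured clientset (rescaleCoreDNS is an illustrative name):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS mirrors the GET+PUT pair above: read the current scale,
	// set spec.replicas to 1, and write it back through the subresource.
	func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1 // status.replicas lags until the controller catches up
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}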
	I0425 12:24:18.439332    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:18.439351    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:18.439360    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:18.439366    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:18.441707    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:18.441718    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:18.441724    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:18.441732    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:18.441737    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:18.441739    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:18 GMT
	I0425 12:24:18.441742    5272 round_trippers.go:580]     Audit-Id: 503bda06-a78d-4601-a165-02b461aac263
	I0425 12:24:18.441744    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:18.442048    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:18.940899    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:18.940931    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:18.940999    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:18.941008    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:18.943047    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:18.943060    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:18.943068    5272 round_trippers.go:580]     Audit-Id: 561f5e63-cbaa-4827-8554-170d8142627e
	I0425 12:24:18.943076    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:18.943081    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:18.943091    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:18.943097    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:18.943101    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:19 GMT
	I0425 12:24:18.943438    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:19.440269    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:19.440286    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:19.440292    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:19.440298    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:19.442140    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:19.442150    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:19.442154    5272 round_trippers.go:580]     Audit-Id: 67439e10-8fe0-46b8-b140-ac461ccd0108
	I0425 12:24:19.442157    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:19.442161    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:19.442164    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:19.442167    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:19.442169    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:19 GMT
	I0425 12:24:19.442327    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:19.442530    5272 node_ready.go:53] node "multinode-034000" has status "Ready":"False"
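node_ready.go:35 set a 6m0s budget, and the GETs above repeat at roughly 500ms intervals until the node's Ready condition turns True; the node_ready.go:53 lines are the "still False" ticks. A minimal sketch of that poll with client-go and apimachinery's wait helpers, assuming a configured clientset (waitNodeReady is an illustrative name; the 500ms interval is inferred from the timestamps):

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady re-fetches the node until its Ready condition is True,
	// mirroring the repeated GET /api/v1/nodes/multinode-034000 calls above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as "not ready yet" and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}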
	I0425 12:24:19.939982    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:19.939997    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:19.940034    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:19.940039    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:19.941462    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:19.941474    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:19.941494    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:20 GMT
	I0425 12:24:19.941498    5272 round_trippers.go:580]     Audit-Id: db84349f-a872-4ef6-a8ab-f828c1ed68c8
	I0425 12:24:19.941501    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:19.941503    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:19.941505    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:19.941508    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:19.941668    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:20.439214    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:20.439231    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:20.439237    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:20.439241    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:20.440737    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:20.440746    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:20.440750    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:20.440753    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:20.440757    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:20 GMT
	I0425 12:24:20.440760    5272 round_trippers.go:580]     Audit-Id: b27a9c03-b284-4618-90ca-1ef5cfe1f6f2
	I0425 12:24:20.440765    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:20.440768    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:20.441038    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:20.939428    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:20.939475    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:20.939485    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:20.939492    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:20.941312    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:20.941325    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:20.941344    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:20.941350    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:20.941353    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:21 GMT
	I0425 12:24:20.941358    5272 round_trippers.go:580]     Audit-Id: ca28dfe6-2028-4178-8987-443e0bae9afe
	I0425 12:24:20.941381    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:20.941384    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:20.941445    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:21.439635    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:21.439649    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:21.439655    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:21.439660    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:21.441296    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:21.441306    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:21.441311    5272 round_trippers.go:580]     Audit-Id: 91ee21e4-e780-4bac-8fd3-b481b678b65e
	I0425 12:24:21.441317    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:21.441321    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:21.441325    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:21.441329    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:21.441333    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:21 GMT
	I0425 12:24:21.441410    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:21.939531    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:21.939563    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:21.939585    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:21.939591    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:21.941550    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:21.941565    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:21.941572    5272 round_trippers.go:580]     Audit-Id: 68517fae-4987-4197-86a7-e085335cc5f6
	I0425 12:24:21.941578    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:21.941581    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:21.941585    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:21.941610    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:21.941618    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:22 GMT
	I0425 12:24:21.941798    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:21.941980    5272 node_ready.go:53] node "multinode-034000" has status "Ready":"False"
	I0425 12:24:22.439316    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:22.439346    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:22.439352    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:22.439357    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:22.440846    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:22.440856    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:22.440862    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:22.440865    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:22 GMT
	I0425 12:24:22.440867    5272 round_trippers.go:580]     Audit-Id: 69345bbc-763a-45f7-8d82-b1ba4602d86d
	I0425 12:24:22.440872    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:22.440876    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:22.440881    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:22.441056    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:22.940393    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:22.940422    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:22.940433    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:22.940439    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:22.943211    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:22.943227    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:22.943233    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:22.943238    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:22.943243    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:23 GMT
	I0425 12:24:22.943247    5272 round_trippers.go:580]     Audit-Id: 30b66637-6696-488e-aa8f-e846dbc66246
	I0425 12:24:22.943272    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:22.943279    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:22.943396    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:23.440287    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:23.440309    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:23.440316    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:23.440320    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:23.442154    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:23.442169    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:23.442176    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:23.442181    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:23 GMT
	I0425 12:24:23.442185    5272 round_trippers.go:580]     Audit-Id: b85438f5-36e4-4f42-9a29-f2ad4d383c1d
	I0425 12:24:23.442188    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:23.442193    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:23.442197    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:23.442296    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:23.940653    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:23.940669    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:23.940675    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:23.940679    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:23.942356    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:23.942368    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:23.942374    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:23.942378    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:23.942382    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:23.942396    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:24 GMT
	I0425 12:24:23.942401    5272 round_trippers.go:580]     Audit-Id: b14fbade-d9f9-4c55-a822-89f4840931d1
	I0425 12:24:23.942404    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:23.942622    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:23.942812    5272 node_ready.go:53] node "multinode-034000" has status "Ready":"False"
	I0425 12:24:24.440536    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:24.440551    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:24.440557    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:24.440561    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:24.442112    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:24.442121    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:24.442126    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:24 GMT
	I0425 12:24:24.442130    5272 round_trippers.go:580]     Audit-Id: f84bcdd7-51e8-4b7a-baf2-29a2e147182e
	I0425 12:24:24.442134    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:24.442153    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:24.442158    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:24.442162    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:24.442329    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:24.940965    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:24.941034    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:24.941045    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:24.941052    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:24.943455    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:24.943472    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:24.943495    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:24.943501    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:25 GMT
	I0425 12:24:24.943504    5272 round_trippers.go:580]     Audit-Id: 333e45e6-5a0b-4827-9f6f-477454c3b52b
	I0425 12:24:24.943510    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:24.943517    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:24.943520    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:24.943698    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:25.440682    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:25.440699    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:25.440706    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:25.440710    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:25.444176    5272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:24:25.444189    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:25.444194    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:25.444216    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:25.444227    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:25.444231    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:25.444237    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:25 GMT
	I0425 12:24:25.444241    5272 round_trippers.go:580]     Audit-Id: 29ae3113-2703-4e07-be32-97e685589855
	I0425 12:24:25.444494    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"315","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0425 12:24:25.940080    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:25.940134    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:25.940148    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:25.940154    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:25.942506    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:25.942519    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:25.942527    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:25.942534    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:25.942540    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:25.942545    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:26 GMT
	I0425 12:24:25.942549    5272 round_trippers.go:580]     Audit-Id: 87989696-c350-4f92-8917-84a9d53d72d1
	I0425 12:24:25.942556    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:25.943019    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"392","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0425 12:24:25.943261    5272 node_ready.go:49] node "multinode-034000" has status "Ready":"True"
	I0425 12:24:25.943277    5272 node_ready.go:38] duration metric: took 8.504024183s for node "multinode-034000" to be "Ready" ...
	I0425 12:24:25.943286    5272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
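(The ~500ms GET loop above is the node-readiness wait summarized by node_ready.go. Below is a minimal client-go sketch of the same polling shape, assuming a placeholder kubeconfig path and borrowing the 6m0s cap logged for the pod wait; it illustrates the pattern, not minikube's actual source.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed placeholder path; minikube uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll the node every 500ms until its NodeReady condition is True,
	// matching the request cadence visible in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "multinode-034000", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready wait finished, err:", err)
}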
	I0425 12:24:25.943344    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:24:25.943352    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:25.943360    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:25.943365    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:25.945452    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:25.945459    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:25.945464    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:25.945468    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:25.945471    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:25.945473    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:26 GMT
	I0425 12:24:25.945475    5272 round_trippers.go:580]     Audit-Id: 9e9deb8e-cb2f-4a4c-940e-1fc7232a46a3
	I0425 12:24:25.945485    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:25.946023    5272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"398"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"397","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
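(The PodList above is fetched without a server-side label selector, so the system-critical filtering implied by the label list in the pod_ready.go line happens on the client. A hedged sketch of that filtering step; the helper name and selector map are illustrative, not minikube's code.)

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// systemCriticalPods lists kube-system pods whose labels match any of the
// component/k8s-app selectors named in the pod_ready.go log line.
func systemCriticalPods(ctx context.Context, client kubernetes.Interface) ([]corev1.Pod, error) {
	want := map[string][]string{
		"k8s-app":   {"kube-dns", "kube-proxy"},
		"component": {"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"},
	}
	list, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	matches := func(p corev1.Pod) bool {
		for key, values := range want {
			for _, v := range values {
				if p.Labels[key] == v {
					return true
				}
			}
		}
		return false
	}
	var out []corev1.Pod
	for _, pod := range list.Items {
		if matches(pod) {
			out = append(out, pod)
		}
	}
	return out, nil
}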
	I0425 12:24:25.948358    5272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace to be "Ready" ...
	I0425 12:24:25.948408    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:24:25.948414    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:25.948419    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:25.948423    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:25.949512    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:25.949519    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:25.949523    5272 round_trippers.go:580]     Audit-Id: 4b611fc1-8c23-41de-bf0a-2f25d94ac4b1
	I0425 12:24:25.949527    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:25.949539    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:25.949541    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:25.949544    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:25.949546    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:26 GMT
	I0425 12:24:25.949727    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"397","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0425 12:24:25.949968    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:25.949975    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:25.949980    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:25.949983    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:25.950877    5272 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:24:25.950884    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:25.950888    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:25.950890    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:26 GMT
	I0425 12:24:25.950893    5272 round_trippers.go:580]     Audit-Id: e248e489-30a3-4efe-a801-f0a6ddb090ff
	I0425 12:24:25.950897    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:25.950900    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:25.950903    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:25.951093    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"392","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0425 12:24:26.448583    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:24:26.448599    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:26.448605    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:26.448623    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:26.455778    5272 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0425 12:24:26.455791    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:26.455796    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:26.455800    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:26.455802    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:26.455806    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:26.455809    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:26 GMT
	I0425 12:24:26.455812    5272 round_trippers.go:580]     Audit-Id: 11fa9c42-f183-4c68-b16c-0ab97726eb6d
	I0425 12:24:26.457570    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"397","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0425 12:24:26.457884    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:26.457892    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:26.457899    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:26.457904    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:26.462219    5272 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 12:24:26.462230    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:26.462235    5272 round_trippers.go:580]     Audit-Id: 98e9ecbd-c17d-47a5-8678-ae7b6fc64ba1
	I0425 12:24:26.462238    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:26.462241    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:26.462244    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:26.462248    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:26.462252    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:26 GMT
	I0425 12:24:26.462338    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"392","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0425 12:24:26.948762    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:24:26.948778    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:26.948784    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:26.948789    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:26.950978    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:26.950986    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:26.950990    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:27 GMT
	I0425 12:24:26.950993    5272 round_trippers.go:580]     Audit-Id: 1b7f1fd4-89d0-4fbe-9f39-1540a922e766
	I0425 12:24:26.950998    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:26.951003    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:26.951006    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:26.951020    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:26.951355    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"397","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0425 12:24:26.951651    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:26.951658    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:26.951663    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:26.951667    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:26.952890    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:26.952899    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:26.952903    5272 round_trippers.go:580]     Audit-Id: 8c4b7c75-2dc4-4e03-8a71-be9d8f7d488d
	I0425 12:24:26.952906    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:26.952909    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:26.952912    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:26.952915    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:26.952920    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:27 GMT
	I0425 12:24:26.953169    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"392","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0425 12:24:27.449045    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:24:27.449064    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:27.449077    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:27.449081    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:27.451930    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:27.451943    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:27.451950    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:27.451954    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:27.451957    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:27.451961    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:27 GMT
	I0425 12:24:27.451964    5272 round_trippers.go:580]     Audit-Id: a6535320-e54f-4aa9-a610-b3ed001bcfea
	I0425 12:24:27.451966    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:27.452055    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"397","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0425 12:24:27.452441    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:27.452451    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:27.452459    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:27.452463    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:27.454080    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:27.454092    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:27.454097    5272 round_trippers.go:580]     Audit-Id: d80cbebb-175d-4202-8d9c-697f85622c33
	I0425 12:24:27.454101    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:27.454105    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:27.454109    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:27.454112    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:27.454115    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:27 GMT
	I0425 12:24:27.454203    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"392","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0425 12:24:27.949852    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:24:27.949903    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:27.949916    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:27.949924    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:27.952592    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:27.952617    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:27.952626    5272 round_trippers.go:580]     Audit-Id: 89cdbbe3-7985-4e47-928b-b889b92c608c
	I0425 12:24:27.952632    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:27.952639    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:27.952642    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:27.952645    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:27.952648    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:27.952740    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"411","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0425 12:24:27.953096    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:27.953106    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:27.953114    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:27.953120    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:27.954513    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:27.954520    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:27.954525    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:27.954527    5272 round_trippers.go:580]     Audit-Id: 6136892d-f445-4f56-a89e-53e1ac498400
	I0425 12:24:27.954531    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:27.954533    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:27.954536    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:27.954540    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:27.954744    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"392","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0425 12:24:27.954900    5272 pod_ready.go:92] pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace has status "Ready":"True"
	I0425 12:24:27.954908    5272 pod_ready.go:81] duration metric: took 2.006479093s for pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace to be "Ready" ...
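(The "Ready":"True" verdicts printed by pod_ready.go reduce to inspecting the pod's PodReady condition. A small hand-rolled equivalent, for illustration; not necessarily the exact helper minikube uses.)

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}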
	I0425 12:24:27.954916    5272 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:24:27.954948    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:24:27.954952    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:27.954957    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:27.954961    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:27.956050    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:27.956058    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:27.956062    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:27.956066    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:27.956070    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:27.956074    5272 round_trippers.go:580]     Audit-Id: b38d9cfb-ace9-4197-abdf-da0674816b44
	I0425 12:24:27.956081    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:27.956084    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:27.956148    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"295","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0425 12:24:27.956366    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:27.956374    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:27.956380    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:27.956385    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:27.957419    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:27.957429    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:27.957437    5272 round_trippers.go:580]     Audit-Id: d27ef53b-bcca-417d-bd9f-7c63e35abb04
	I0425 12:24:27.957441    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:27.957444    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:27.957448    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:27.957450    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:27.957453    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:27.957593    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"392","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0425 12:24:27.957774    5272 pod_ready.go:92] pod "etcd-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:24:27.957781    5272 pod_ready.go:81] duration metric: took 2.859396ms for pod "etcd-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:24:27.957790    5272 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:24:27.957823    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-034000
	I0425 12:24:27.957828    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:27.957833    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:27.957836    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:27.958779    5272 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:24:27.958785    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:27.958790    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:27.958793    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:27.958807    5272 round_trippers.go:580]     Audit-Id: 5b02871c-29fd-44e6-981e-e1f960a4ffcf
	I0425 12:24:27.958813    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:27.958817    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:27.958823    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:27.958919    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-034000","namespace":"kube-system","uid":"d142ad34-9a12-42f9-b92d-e0f968eaaa14","resourceVersion":"278","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"d809c763efd59e895582aab9f4e65d83","kubernetes.io/config.mirror":"d809c763efd59e895582aab9f4e65d83","kubernetes.io/config.seen":"2024-04-25T19:24:03.349967563Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0425 12:24:27.959163    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:27.959170    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:27.959176    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:27.959180    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:27.960014    5272 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:24:27.960021    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:27.960032    5272 round_trippers.go:580]     Audit-Id: 56ada061-ae6d-4218-8aa1-512a81d8f556
	I0425 12:24:27.960037    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:27.960041    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:27.960045    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:27.960049    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:27.960054    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:27.960177    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"392","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0425 12:24:27.960333    5272 pod_ready.go:92] pod "kube-apiserver-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:24:27.960341    5272 pod_ready.go:81] duration metric: took 2.545479ms for pod "kube-apiserver-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:24:27.960347    5272 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:24:27.960374    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-034000
	I0425 12:24:27.960378    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:27.960383    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:27.960387    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:27.961361    5272 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:24:27.961368    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:27.961375    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:27.961381    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:27.961385    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:27.961390    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:27.961394    5272 round_trippers.go:580]     Audit-Id: ff1b02d8-55eb-4a22-a28b-6b22f51fd46d
	I0425 12:24:27.961397    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:27.961511    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-034000","namespace":"kube-system","uid":"19072fbe-3cb2-4b92-bd98-b549daec4cf2","resourceVersion":"305","creationTimestamp":"2024-04-25T19:24:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8241dd03fc3448a4525ccebdefebf535","kubernetes.io/config.mirror":"8241dd03fc3448a4525ccebdefebf535","kubernetes.io/config.seen":"2024-04-25T19:23:58.495195502Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0425 12:24:27.961747    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:27.961754    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:27.961759    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:27.961763    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:27.964986    5272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:24:27.964996    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:27.965002    5272 round_trippers.go:580]     Audit-Id: c251b89f-cf70-4a04-bd75-57af54affce1
	I0425 12:24:27.965006    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:27.965009    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:27.965012    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:27.965016    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:27.965018    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:27.965087    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"392","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0425 12:24:27.965257    5272 pod_ready.go:92] pod "kube-controller-manager-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:24:27.965264    5272 pod_ready.go:81] duration metric: took 4.912105ms for pod "kube-controller-manager-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:24:27.965269    5272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gmspl" in "kube-system" namespace to be "Ready" ...
	I0425 12:24:27.965302    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmspl
	I0425 12:24:27.965307    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:27.965312    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:27.965316    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:27.966671    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:27.966678    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:27.966682    5272 round_trippers.go:580]     Audit-Id: 26691df4-bd14-4de9-8b13-051141918298
	I0425 12:24:27.966699    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:27.966705    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:27.966713    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:27.966718    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:27.966720    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:27.966847    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gmspl","generateName":"kube-proxy-","namespace":"kube-system","uid":"b0f6c7c8-ef54-4c63-9de2-05e01ace3e15","resourceVersion":"380","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0425 12:24:27.967100    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:27.967108    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:27.967113    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:27.967117    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:27.968067    5272 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:24:27.968079    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:27.968086    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:27.968090    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:27.968094    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:27.968097    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:27.968100    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:27.968102    5272 round_trippers.go:580]     Audit-Id: 7c0390fa-e539-4485-85e6-978ebf55f538
	I0425 12:24:27.968201    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"392","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0425 12:24:27.968364    5272 pod_ready.go:92] pod "kube-proxy-gmspl" in "kube-system" namespace has status "Ready":"True"
	I0425 12:24:27.968371    5272 pod_ready.go:81] duration metric: took 3.097683ms for pod "kube-proxy-gmspl" in "kube-system" namespace to be "Ready" ...
	I0425 12:24:27.968379    5272 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:24:28.151726    5272 request.go:629] Waited for 183.3029ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-034000
	I0425 12:24:28.151779    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-034000
	I0425 12:24:28.151852    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:28.151878    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:28.151882    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:28.153758    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:28.153774    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:28.153782    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:28.153787    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:28.153791    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:28.153794    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:28.153798    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:28.153801    5272 round_trippers.go:580]     Audit-Id: 7fc598cf-fbd0-434a-8672-21ab717a1897
	I0425 12:24:28.153908    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-034000","namespace":"kube-system","uid":"889fb9d4-d8d9-4a92-be22-d0ab1518bc93","resourceVersion":"274","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9a729f77a28640b9fa006c14e6acbd43","kubernetes.io/config.mirror":"9a729f77a28640b9fa006c14e6acbd43","kubernetes.io/config.seen":"2024-04-25T19:24:03.349969029Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0425 12:24:28.350118    5272 request.go:629] Waited for 195.948233ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:28.350215    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:24:28.350230    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:28.350244    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:28.350252    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:28.352603    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:28.352619    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:28.352626    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:28.352634    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:28.352638    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:28.352642    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:28.352648    5272 round_trippers.go:580]     Audit-Id: 013021d5-a7f5-49e9-b9f7-a7dc79942656
	I0425 12:24:28.352652    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:28.352902    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"392","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0425 12:24:28.353143    5272 pod_ready.go:92] pod "kube-scheduler-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:24:28.353154    5272 pod_ready.go:81] duration metric: took 384.757673ms for pod "kube-scheduler-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:24:28.353163    5272 pod_ready.go:38] duration metric: took 2.409790566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
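The pod_ready waits above poll each system-critical pod until its PodReady condition reports True, with a 6m0s budget per pod. A minimal client-go sketch of the same check (the kubeconfig path is hypothetical; minikube's own helper lives in pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-multinode-034000", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}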
	I0425 12:24:28.353186    5272 api_server.go:52] waiting for apiserver process to appear ...
	I0425 12:24:28.353243    5272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:24:28.365889    5272 command_runner.go:130] > 1869
	I0425 12:24:28.366005    5272 api_server.go:72] duration metric: took 11.399450302s to wait for apiserver process to appear ...
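The process check is a plain pgrep run inside the VM over minikube's ssh_runner: -x requires the pattern to match the whole command line, -n picks the newest matching process, and -f matches against the full argv. A local stand-in with os/exec (it will simply report "not found" on a machine without a kube-apiserver):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found yet:", err)
		return
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver pid:", pid) // the log above found pid 1869
}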
	I0425 12:24:28.366021    5272 api_server.go:88] waiting for apiserver healthz status ...
	I0425 12:24:28.366044    5272 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:24:28.369904    5272 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
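The health probe is an ordinary HTTPS GET of /healthz that expects a 200 status and the literal body "ok", exactly as logged above. A self-contained sketch; it skips TLS verification only to avoid wiring in the cluster CA, which the real client trusts:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch self-contained; production
		// code should verify against the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.169.0.16:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}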
	I0425 12:24:28.369941    5272 round_trippers.go:463] GET https://192.169.0.16:8443/version
	I0425 12:24:28.369946    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:28.369952    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:28.369956    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:28.370406    5272 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:24:28.370413    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:28.370417    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:28.370421    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:28.370424    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:28.370439    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:28.370445    5272 round_trippers.go:580]     Content-Length: 263
	I0425 12:24:28.370450    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:28.370453    5272 round_trippers.go:580]     Audit-Id: e5e843ab-030e-4dd2-9f6c-24da3a046a76
	I0425 12:24:28.370461    5272 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0425 12:24:28.370503    5272 api_server.go:141] control plane version: v1.30.0
	I0425 12:24:28.370515    5272 api_server.go:131] duration metric: took 4.488442ms to wait for apiserver health ...
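The version check just decodes the /version JSON shown above (apimachinery ships a matching version.Info type; a local struct keeps this sketch dependency-free):

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the fields of the /version payload that matter here.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	// Abbreviated copy of the response body logged above.
	raw := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.0","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // matches the log: v1.30.0
}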
	I0425 12:24:28.370520    5272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 12:24:28.550103    5272 request.go:629] Waited for 179.514162ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:24:28.550148    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:24:28.550159    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:28.550176    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:28.550183    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:28.553397    5272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:24:28.553418    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:28.553429    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:28.553436    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:28.553442    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:28.553449    5272 round_trippers.go:580]     Audit-Id: ffbf4a89-79c3-4618-9ada-33ef55e9135e
	I0425 12:24:28.553457    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:28.553462    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:28.554047    5272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"416"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"411","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0425 12:24:28.555343    5272 system_pods.go:59] 8 kube-system pods found
	I0425 12:24:28.555359    5272 system_pods.go:61] "coredns-7db6d8ff4d-w5z5l" [21ddb5bc-fcf1-4ec4-9fdb-8595d406b302] Running
	I0425 12:24:28.555365    5272 system_pods.go:61] "etcd-multinode-034000" [fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5] Running
	I0425 12:24:28.555368    5272 system_pods.go:61] "kindnet-7ktv2" [957b7d0e-0754-481e-aa73-6772434e58e3] Running
	I0425 12:24:28.555371    5272 system_pods.go:61] "kube-apiserver-multinode-034000" [d142ad34-9a12-42f9-b92d-e0f968eaaa14] Running
	I0425 12:24:28.555375    5272 system_pods.go:61] "kube-controller-manager-multinode-034000" [19072fbe-3cb2-4b92-bd98-b549daec4cf2] Running
	I0425 12:24:28.555377    5272 system_pods.go:61] "kube-proxy-gmspl" [b0f6c7c8-ef54-4c63-9de2-05e01ace3e15] Running
	I0425 12:24:28.555380    5272 system_pods.go:61] "kube-scheduler-multinode-034000" [889fb9d4-d8d9-4a92-be22-d0ab1518bc93] Running
	I0425 12:24:28.555383    5272 system_pods.go:61] "storage-provisioner" [89c78c52-dabe-4a5b-ac3b-0209ccb11139] Running
	I0425 12:24:28.555387    5272 system_pods.go:74] duration metric: took 184.85879ms to wait for pod list to return data ...
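The "8 kube-system pods found" summary comes from a single List call over the namespace. A minimal sketch producing the same kind of name/UID/state listing (kubeconfig path hypothetical):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}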
	I0425 12:24:28.555393    5272 default_sa.go:34] waiting for default service account to be created ...
	I0425 12:24:28.750214    5272 request.go:629] Waited for 194.760315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/default/serviceaccounts
	I0425 12:24:28.750282    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/default/serviceaccounts
	I0425 12:24:28.750286    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:28.750292    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:28.750295    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:28.751964    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:28.751972    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:28.751977    5272 round_trippers.go:580]     Content-Length: 261
	I0425 12:24:28.751980    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:28 GMT
	I0425 12:24:28.751982    5272 round_trippers.go:580]     Audit-Id: 807f816e-b056-44bf-9af3-55ed31c769c5
	I0425 12:24:28.751985    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:28.751988    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:28.751991    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:28.751994    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:28.752004    5272 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"416"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a557ca19-d109-4b3a-9af5-e9a633494e34","resourceVersion":"311","creationTimestamp":"2024-04-25T19:24:16Z"}}]}
	I0425 12:24:28.752132    5272 default_sa.go:45] found service account: "default"
	I0425 12:24:28.752142    5272 default_sa.go:55] duration metric: took 196.738888ms for default service account to be created ...
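The default_sa wait polls the "default" namespace until the service account controller has created the "default" ServiceAccount, since pods cannot be admitted before it exists. A sketch of the same poll (kubeconfig path and the one-minute budget are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		// The service account controller creates "default" shortly after boot.
		sa, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(),
			"default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("found service account:", sa.Name)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for default service account")
}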
	I0425 12:24:28.752147    5272 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 12:24:28.950355    5272 request.go:629] Waited for 198.137954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:24:28.950462    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:24:28.950471    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:28.950479    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:28.950486    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:28.953965    5272 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:24:28.953984    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:28.953994    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:29 GMT
	I0425 12:24:28.954003    5272 round_trippers.go:580]     Audit-Id: 686e5a0e-3177-4a6f-80f9-5ac38668b362
	I0425 12:24:28.954010    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:28.954017    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:28.954022    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:28.954029    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:28.954783    5272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"416"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"411","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0425 12:24:28.956062    5272 system_pods.go:86] 8 kube-system pods found
	I0425 12:24:28.956072    5272 system_pods.go:89] "coredns-7db6d8ff4d-w5z5l" [21ddb5bc-fcf1-4ec4-9fdb-8595d406b302] Running
	I0425 12:24:28.956076    5272 system_pods.go:89] "etcd-multinode-034000" [fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5] Running
	I0425 12:24:28.956080    5272 system_pods.go:89] "kindnet-7ktv2" [957b7d0e-0754-481e-aa73-6772434e58e3] Running
	I0425 12:24:28.956083    5272 system_pods.go:89] "kube-apiserver-multinode-034000" [d142ad34-9a12-42f9-b92d-e0f968eaaa14] Running
	I0425 12:24:28.956087    5272 system_pods.go:89] "kube-controller-manager-multinode-034000" [19072fbe-3cb2-4b92-bd98-b549daec4cf2] Running
	I0425 12:24:28.956090    5272 system_pods.go:89] "kube-proxy-gmspl" [b0f6c7c8-ef54-4c63-9de2-05e01ace3e15] Running
	I0425 12:24:28.956094    5272 system_pods.go:89] "kube-scheduler-multinode-034000" [889fb9d4-d8d9-4a92-be22-d0ab1518bc93] Running
	I0425 12:24:28.956099    5272 system_pods.go:89] "storage-provisioner" [89c78c52-dabe-4a5b-ac3b-0209ccb11139] Running
	I0425 12:24:28.956104    5272 system_pods.go:126] duration metric: took 203.947419ms to wait for k8s-apps to be running ...
	I0425 12:24:28.956109    5272 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 12:24:28.956155    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:24:28.967567    5272 system_svc.go:56] duration metric: took 11.454147ms WaitForService to wait for kubelet
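The kubelet check relies entirely on systemctl's exit status: "is-active --quiet" prints nothing and exits 0 only when the unit is active. A local sketch of the same test (the logged invocation also passes a literal "service" token and runs over SSH; both are dropped here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means active; any nonzero code means inactive or failed.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}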
	I0425 12:24:28.967584    5272 kubeadm.go:576] duration metric: took 12.001009115s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 12:24:28.967595    5272 node_conditions.go:102] verifying NodePressure condition ...
	I0425 12:24:29.150216    5272 request.go:629] Waited for 182.556039ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes
	I0425 12:24:29.150284    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0425 12:24:29.150292    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:29.150304    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:29.150343    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:29.152957    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:29.152972    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:29.152979    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:29 GMT
	I0425 12:24:29.152983    5272 round_trippers.go:580]     Audit-Id: a56198d8-d6b7-4c1e-8cf4-0d8f8f6ad7e4
	I0425 12:24:29.152986    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:29.152989    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:29.152993    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:29.152996    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:29.153159    5272 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"416"},"items":[{"metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"392","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I0425 12:24:29.153440    5272 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:24:29.153458    5272 node_conditions.go:123] node cpu capacity is 2
	I0425 12:24:29.153470    5272 node_conditions.go:105] duration metric: took 185.865108ms to run NodePressure ...
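The NodePressure verification above boils down to listing nodes and reading capacity figures out of each node's status, which is where the 17734596Ki ephemeral-storage and 2-CPU numbers come from. A sketch of that read (kubeconfig path hypothetical):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}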
	I0425 12:24:29.153480    5272 start.go:240] waiting for startup goroutines ...
	I0425 12:24:29.153487    5272 start.go:245] waiting for cluster config update ...
	I0425 12:24:29.153502    5272 start.go:254] writing updated cluster config ...
	I0425 12:24:29.177385    5272 out.go:177] 
	I0425 12:24:29.199148    5272 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:24:29.199236    5272 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:24:29.220942    5272 out.go:177] * Starting "multinode-034000-m02" worker node in "multinode-034000" cluster
	I0425 12:24:29.263803    5272 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:24:29.263853    5272 cache.go:56] Caching tarball of preloaded images
	I0425 12:24:29.264036    5272 preload.go:173] Found /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 12:24:29.264053    5272 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 12:24:29.264149    5272 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:24:29.265210    5272 start.go:360] acquireMachinesLock for multinode-034000-m02: {Name:mk3030f9170bc25c9124548f80d3e90a8c4abff5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 12:24:29.265325    5272 start.go:364] duration metric: took 91.177µs to acquireMachinesLock for "multinode-034000-m02"
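acquireMachinesLock serializes machine creation across concurrent minikube processes; the Spec logged above carries a 500ms retry delay and a 13m timeout. A stand-in with the same retry/timeout shape built on an O_EXCL lock file; this is only a sketch, not minikube's actual cross-process mutex:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire retries creating the lock file every `delay` until `timeout`
// elapses, approximating the Delay/Timeout fields in the log entry above.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/multinode-034000-m02.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to create the machine")
}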
	I0425 12:24:29.265358    5272 start.go:93] Provisioning new machine with config: &{Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0425 12:24:29.265460    5272 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0425 12:24:29.287067    5272 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0425 12:24:29.287216    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:24:29.287246    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:24:29.297014    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52373
	I0425 12:24:29.297347    5272 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:24:29.297656    5272 main.go:141] libmachine: Using API Version  1
	I0425 12:24:29.297665    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:24:29.297893    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:24:29.298001    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetMachineName
	I0425 12:24:29.298106    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:24:29.298202    5272 start.go:159] libmachine.API.Create for "multinode-034000" (driver="hyperkit")
	I0425 12:24:29.298216    5272 client.go:168] LocalClient.Create starting
	I0425 12:24:29.298245    5272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem
	I0425 12:24:29.298287    5272 main.go:141] libmachine: Decoding PEM data...
	I0425 12:24:29.298297    5272 main.go:141] libmachine: Parsing certificate...
	I0425 12:24:29.298340    5272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem
	I0425 12:24:29.298375    5272 main.go:141] libmachine: Decoding PEM data...
	I0425 12:24:29.298385    5272 main.go:141] libmachine: Parsing certificate...
	I0425 12:24:29.298398    5272 main.go:141] libmachine: Running pre-create checks...
	I0425 12:24:29.298403    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .PreCreateCheck
	I0425 12:24:29.298475    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:29.298515    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetConfigRaw
	I0425 12:24:29.308293    5272 main.go:141] libmachine: Creating machine...
	I0425 12:24:29.308354    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .Create
	I0425 12:24:29.308495    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:29.308724    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | I0425 12:24:29.308486    5308 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18757-1425/.minikube
	I0425 12:24:29.308827    5272 main.go:141] libmachine: (multinode-034000-m02) Downloading /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18757-1425/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 12:24:29.497630    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | I0425 12:24:29.497563    5308 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa...
	I0425 12:24:29.544307    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | I0425 12:24:29.544229    5308 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/multinode-034000-m02.rawdisk...
	I0425 12:24:29.544322    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Writing magic tar header
	I0425 12:24:29.544333    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Writing SSH key tar header
	I0425 12:24:29.544791    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | I0425 12:24:29.544718    5308 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02 ...
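The "Creating ssh key" step above writes the id_rsa pair the driver will later use to reach the guest. A sketch of equivalent keypair generation with crypto/rsa and golang.org/x/crypto/ssh (key size and output names are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// PEM-encode the private key, as in the machine's id_rsa.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	// Write the matching authorized_keys-format public half.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}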
	I0425 12:24:29.898458    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:29.898480    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/hyperkit.pid
	I0425 12:24:29.898515    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Using UUID 94b40896-ddd7-48d5-b8c4-70380b6d3376
	I0425 12:24:29.925938    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Generated MAC 46:26:de:d7:8e:2e
	I0425 12:24:29.925965    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000
	I0425 12:24:29.925999    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:29 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"94b40896-ddd7-48d5-b8c4-70380b6d3376", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0425 12:24:29.926031    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:29 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"94b40896-ddd7-48d5-b8c4-70380b6d3376", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0425 12:24:29.926076    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:29 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "94b40896-ddd7-48d5-b8c4-70380b6d3376", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/multinode-034000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/bzimage,/Users/j
enkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"}
	I0425 12:24:29.926112    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:29 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 94b40896-ddd7-48d5-b8c4-70380b6d3376 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/multinode-034000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/bzimage,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/mult
inode-034000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"
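The driver assembles the hyperkit argv logged above and starts the process, recording the pid file it subsequently polls. A heavily truncated sketch of that launch; the real invocation also wires up the raw disk, the boot2docker ISO, the console and the kexec boot, so this subset alone will not boot a guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// A few arguments from the command line logged above; paths shortened.
	args := []string{
		"-A", "-u",
		"-F", "hyperkit.pid", // pid file the driver later reads back
		"-c", "2", // CPUs
		"-m", "2200M", // memory
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net", // NIC whose MAC vmnet derives from the -U UUID
		"-U", "94b40896-ddd7-48d5-b8c4-70380b6d3376",
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // "Redirecting stdout/stderr to logger"
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("hyperkit pid:", cmd.Process.Pid) // the log shows pid 5309
}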
	I0425 12:24:29.926134    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:29 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0425 12:24:29.928997    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:29 DEBUG: hyperkit: Pid is 5309
	I0425 12:24:29.929440    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Attempt 0
	I0425 12:24:29.929454    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:29.929504    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:24:29.930384    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Searching for 46:26:de:d7:8e:2e in /var/db/dhcpd_leases ...
	I0425 12:24:29.930459    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0425 12:24:29.930477    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:d3:c3:87:d3:c7 ID:1,1e:d3:c3:87:d3:c7 Lease:0x662bff37}
	I0425 12:24:29.930503    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9a:5b:b6:68:c6:7f ID:1,9a:5b:b6:68:c6:7f Lease:0x662aadab}
	I0425 12:24:29.930523    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2a:2:66:32:5b:d7 ID:1,2a:2:66:32:5b:d7 Lease:0x662aad93}
	I0425 12:24:29.930552    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:46:90:90:e0:60:8c ID:1,46:90:90:e0:60:8c Lease:0x662aad66}
	I0425 12:24:29.930570    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e:cc:fd:cc:1e:71 ID:1,e:cc:fd:cc:1e:71 Lease:0x662bfe2a}
	I0425 12:24:29.930581    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:9a:35:70:19:5d ID:1,a:9a:35:70:19:5d Lease:0x662bfdeb}
	I0425 12:24:29.930596    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ca:c3:43:6b:33:f8 ID:1,ca:c3:43:6b:33:f8 Lease:0x662bfd9a}
	I0425 12:24:29.930619    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:0:2a:a2:aa:73 ID:1,3e:0:2a:a2:aa:73 Lease:0x662bfb6d}
	I0425 12:24:29.930636    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9e:4a:3:d7:af:68 ID:1,9e:4a:3:d7:af:68 Lease:0x662aa7d2}
	I0425 12:24:29.930654    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:2a:72:c0:46:70:5e ID:1,2a:72:c0:46:70:5e Lease:0x662bfb4d}
	I0425 12:24:29.930664    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7e:6e:5b:5f:88:ce ID:1,7e:6e:5b:5f:88:ce Lease:0x662bfb3b}
	I0425 12:24:29.930671    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1e:23:55:47:3b:5d ID:1,1e:23:55:47:3b:5d Lease:0x662bf4e1}
	I0425 12:24:29.930678    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:60:f8:40:a2:21 ID:1,ea:60:f8:40:a2:21 Lease:0x662bf41c}
	I0425 12:24:29.930686    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:39:86:1c:90:f7 ID:1,da:39:86:1c:90:f7 Lease:0x662bf2fa}
	I0425 12:24:29.930694    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x662bf2b4}
	I0425 12:24:29.936508    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:29 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0425 12:24:29.944639    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:29 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0425 12:24:29.945531    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:24:29.945556    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:24:29.945566    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:24:29.945580    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:24:30.330631    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0425 12:24:30.330650    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0425 12:24:30.445472    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:24:30.445491    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:24:30.445501    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:24:30.445514    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:24:30.446336    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0425 12:24:30.446345    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0425 12:24:31.932478    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Attempt 1
	I0425 12:24:31.932494    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:31.932590    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:24:31.933431    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Searching for 46:26:de:d7:8e:2e in /var/db/dhcpd_leases ...
	I0425 12:24:31.933519    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0425 12:24:31.933539    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:d3:c3:87:d3:c7 ID:1,1e:d3:c3:87:d3:c7 Lease:0x662bff37}
	I0425 12:24:31.933550    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9a:5b:b6:68:c6:7f ID:1,9a:5b:b6:68:c6:7f Lease:0x662aadab}
	I0425 12:24:31.933560    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2a:2:66:32:5b:d7 ID:1,2a:2:66:32:5b:d7 Lease:0x662aad93}
	I0425 12:24:31.933577    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:46:90:90:e0:60:8c ID:1,46:90:90:e0:60:8c Lease:0x662aad66}
	I0425 12:24:31.933608    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e:cc:fd:cc:1e:71 ID:1,e:cc:fd:cc:1e:71 Lease:0x662bfe2a}
	I0425 12:24:31.933635    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:9a:35:70:19:5d ID:1,a:9a:35:70:19:5d Lease:0x662bfdeb}
	I0425 12:24:31.933659    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ca:c3:43:6b:33:f8 ID:1,ca:c3:43:6b:33:f8 Lease:0x662bfd9a}
	I0425 12:24:31.933667    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:0:2a:a2:aa:73 ID:1,3e:0:2a:a2:aa:73 Lease:0x662bfb6d}
	I0425 12:24:31.933687    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9e:4a:3:d7:af:68 ID:1,9e:4a:3:d7:af:68 Lease:0x662aa7d2}
	I0425 12:24:31.933697    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:2a:72:c0:46:70:5e ID:1,2a:72:c0:46:70:5e Lease:0x662bfb4d}
	I0425 12:24:31.933732    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7e:6e:5b:5f:88:ce ID:1,7e:6e:5b:5f:88:ce Lease:0x662bfb3b}
	I0425 12:24:31.933740    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1e:23:55:47:3b:5d ID:1,1e:23:55:47:3b:5d Lease:0x662bf4e1}
	I0425 12:24:31.933766    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:60:f8:40:a2:21 ID:1,ea:60:f8:40:a2:21 Lease:0x662bf41c}
	I0425 12:24:31.933775    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:39:86:1c:90:f7 ID:1,da:39:86:1c:90:f7 Lease:0x662bf2fa}
	I0425 12:24:31.933783    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x662bf2b4}
	I0425 12:24:33.934026    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Attempt 2
	I0425 12:24:33.934043    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:33.934134    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:24:33.934998    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Searching for 46:26:de:d7:8e:2e in /var/db/dhcpd_leases ...
	I0425 12:24:33.935051    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0425 12:24:33.935061    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:d3:c3:87:d3:c7 ID:1,1e:d3:c3:87:d3:c7 Lease:0x662bff37}
	I0425 12:24:33.935086    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9a:5b:b6:68:c6:7f ID:1,9a:5b:b6:68:c6:7f Lease:0x662aadab}
	I0425 12:24:33.935116    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2a:2:66:32:5b:d7 ID:1,2a:2:66:32:5b:d7 Lease:0x662aad93}
	I0425 12:24:33.935165    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:46:90:90:e0:60:8c ID:1,46:90:90:e0:60:8c Lease:0x662aad66}
	I0425 12:24:33.935175    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e:cc:fd:cc:1e:71 ID:1,e:cc:fd:cc:1e:71 Lease:0x662bfe2a}
	I0425 12:24:33.935194    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:9a:35:70:19:5d ID:1,a:9a:35:70:19:5d Lease:0x662bfdeb}
	I0425 12:24:33.935207    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ca:c3:43:6b:33:f8 ID:1,ca:c3:43:6b:33:f8 Lease:0x662bfd9a}
	I0425 12:24:33.935219    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:0:2a:a2:aa:73 ID:1,3e:0:2a:a2:aa:73 Lease:0x662bfb6d}
	I0425 12:24:33.935234    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9e:4a:3:d7:af:68 ID:1,9e:4a:3:d7:af:68 Lease:0x662aa7d2}
	I0425 12:24:33.935260    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:2a:72:c0:46:70:5e ID:1,2a:72:c0:46:70:5e Lease:0x662bfb4d}
	I0425 12:24:33.935268    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7e:6e:5b:5f:88:ce ID:1,7e:6e:5b:5f:88:ce Lease:0x662bfb3b}
	I0425 12:24:33.935275    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1e:23:55:47:3b:5d ID:1,1e:23:55:47:3b:5d Lease:0x662bf4e1}
	I0425 12:24:33.935310    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:60:f8:40:a2:21 ID:1,ea:60:f8:40:a2:21 Lease:0x662bf41c}
	I0425 12:24:33.935317    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:39:86:1c:90:f7 ID:1,da:39:86:1c:90:f7 Lease:0x662bf2fa}
	I0425 12:24:33.935324    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x662bf2b4}
	I0425 12:24:35.733352    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:35 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0425 12:24:35.733395    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:35 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0425 12:24:35.733402    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:35 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0425 12:24:35.756116    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:24:35 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0425 12:24:35.935469    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Attempt 3
	I0425 12:24:35.935515    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:35.935651    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:24:35.937104    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Searching for 46:26:de:d7:8e:2e in /var/db/dhcpd_leases ...
	I0425 12:24:35.937173    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0425 12:24:35.937207    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:d3:c3:87:d3:c7 ID:1,1e:d3:c3:87:d3:c7 Lease:0x662bff37}
	I0425 12:24:35.937256    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9a:5b:b6:68:c6:7f ID:1,9a:5b:b6:68:c6:7f Lease:0x662aadab}
	I0425 12:24:35.937281    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2a:2:66:32:5b:d7 ID:1,2a:2:66:32:5b:d7 Lease:0x662aad93}
	I0425 12:24:35.937295    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:46:90:90:e0:60:8c ID:1,46:90:90:e0:60:8c Lease:0x662aad66}
	I0425 12:24:35.937312    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e:cc:fd:cc:1e:71 ID:1,e:cc:fd:cc:1e:71 Lease:0x662bfe2a}
	I0425 12:24:35.937325    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:9a:35:70:19:5d ID:1,a:9a:35:70:19:5d Lease:0x662bfdeb}
	I0425 12:24:35.937340    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ca:c3:43:6b:33:f8 ID:1,ca:c3:43:6b:33:f8 Lease:0x662bfd9a}
	I0425 12:24:35.937363    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:0:2a:a2:aa:73 ID:1,3e:0:2a:a2:aa:73 Lease:0x662bfb6d}
	I0425 12:24:35.937377    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9e:4a:3:d7:af:68 ID:1,9e:4a:3:d7:af:68 Lease:0x662aa7d2}
	I0425 12:24:35.937391    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:2a:72:c0:46:70:5e ID:1,2a:72:c0:46:70:5e Lease:0x662bfb4d}
	I0425 12:24:35.937407    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7e:6e:5b:5f:88:ce ID:1,7e:6e:5b:5f:88:ce Lease:0x662bfb3b}
	I0425 12:24:35.937421    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1e:23:55:47:3b:5d ID:1,1e:23:55:47:3b:5d Lease:0x662bf4e1}
	I0425 12:24:35.937436    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:60:f8:40:a2:21 ID:1,ea:60:f8:40:a2:21 Lease:0x662bf41c}
	I0425 12:24:35.937460    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:39:86:1c:90:f7 ID:1,da:39:86:1c:90:f7 Lease:0x662bf2fa}
	I0425 12:24:35.937487    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x662bf2b4}
	I0425 12:24:37.938646    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Attempt 4
	I0425 12:24:37.938672    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:37.938761    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:24:37.939530    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Searching for 46:26:de:d7:8e:2e in /var/db/dhcpd_leases ...
	I0425 12:24:37.939587    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0425 12:24:37.939597    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:d3:c3:87:d3:c7 ID:1,1e:d3:c3:87:d3:c7 Lease:0x662bff37}
	I0425 12:24:37.939606    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9a:5b:b6:68:c6:7f ID:1,9a:5b:b6:68:c6:7f Lease:0x662aadab}
	I0425 12:24:37.939612    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2a:2:66:32:5b:d7 ID:1,2a:2:66:32:5b:d7 Lease:0x662aad93}
	I0425 12:24:37.939619    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:46:90:90:e0:60:8c ID:1,46:90:90:e0:60:8c Lease:0x662aad66}
	I0425 12:24:37.939624    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e:cc:fd:cc:1e:71 ID:1,e:cc:fd:cc:1e:71 Lease:0x662bfe2a}
	I0425 12:24:37.939631    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:9a:35:70:19:5d ID:1,a:9a:35:70:19:5d Lease:0x662bfdeb}
	I0425 12:24:37.939637    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ca:c3:43:6b:33:f8 ID:1,ca:c3:43:6b:33:f8 Lease:0x662bfd9a}
	I0425 12:24:37.939644    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:0:2a:a2:aa:73 ID:1,3e:0:2a:a2:aa:73 Lease:0x662bfb6d}
	I0425 12:24:37.939652    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:9e:4a:3:d7:af:68 ID:1,9e:4a:3:d7:af:68 Lease:0x662aa7d2}
	I0425 12:24:37.939671    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:2a:72:c0:46:70:5e ID:1,2a:72:c0:46:70:5e Lease:0x662bfb4d}
	I0425 12:24:37.939684    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7e:6e:5b:5f:88:ce ID:1,7e:6e:5b:5f:88:ce Lease:0x662bfb3b}
	I0425 12:24:37.939691    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1e:23:55:47:3b:5d ID:1,1e:23:55:47:3b:5d Lease:0x662bf4e1}
	I0425 12:24:37.939699    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:60:f8:40:a2:21 ID:1,ea:60:f8:40:a2:21 Lease:0x662bf41c}
	I0425 12:24:37.939719    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:39:86:1c:90:f7 ID:1,da:39:86:1c:90:f7 Lease:0x662bf2fa}
	I0425 12:24:37.939728    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x662bf2b4}
	I0425 12:24:39.940365    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Attempt 5
	I0425 12:24:39.940399    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:39.940528    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:24:39.941944    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Searching for 46:26:de:d7:8e:2e in /var/db/dhcpd_leases ...
	I0425 12:24:39.942006    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Found 16 entries in /var/db/dhcpd_leases!
	I0425 12:24:39.942022    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:46:26:de:d7:8e:2e ID:1,46:26:de:d7:8e:2e Lease:0x662bff76}
	I0425 12:24:39.942066    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | Found match: 46:26:de:d7:8e:2e
	I0425 12:24:39.942084    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | IP: 192.169.0.17
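The lease scan above is how the hyperkit driver learns the VM's IP: it re-reads macOS's /var/db/dhcpd_leases on each attempt until an entry's hardware address matches the VM's generated MAC (here found on attempt 5, once the guest obtained 192.169.0.17). A minimal Go sketch of that lookup follows; the key names (ip_address=, hw_address=) reflect the brace-delimited lease format on macOS, and the parsing details are an illustration rather than minikube's actual code.

    // lease_scan.go — sketch: resolve a VM's IP from /var/db/dhcpd_leases by MAC.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func ipForMAC(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip, hw string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			// hw_address is "<type>,<mac>", e.g. "1,46:26:de:d7:8e:2e"
    			hw = line[strings.Index(line, ",")+1:]
    		case line == "}": // end of one lease entry
    			if hw == mac {
    				return ip, nil
    			}
    			ip, hw = "", ""
    		}
    	}
    	return "", fmt.Errorf("MAC %s not found in %s", mac, path)
    }

    func main() {
    	ip, err := ipForMAC("/var/db/dhcpd_leases", "46:26:de:d7:8e:2e")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println(ip) // 192.169.0.17 in the run above
    }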
	I0425 12:24:39.942116    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetConfigRaw
	I0425 12:24:39.942889    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:24:39.943032    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:24:39.943183    5272 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 12:24:39.943194    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:24:39.943300    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:39.943376    5272 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:24:39.944324    5272 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 12:24:39.944330    5272 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 12:24:39.944342    5272 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 12:24:39.944347    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:24:39.944433    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:24:39.944516    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:39.944609    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:39.944692    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:24:39.944813    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:24:39.945007    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:24:39.945015    5272 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 12:24:39.962697    5272 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0425 12:24:43.017845    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
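WaitForSSH above is a plain readiness probe: dial, run `exit 0`, and retry on failures such as the transient handshake error logged at 12:24:39. Below is a minimal sketch under stated assumptions (golang.org/x/crypto/ssh, a hypothetical key path, a fixed 2s backoff), not the libmachine implementation.

    // wait_for_ssh.go — sketch: retry "exit 0" over SSH until the guest is ready.
    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func waitForSSH(addr string, cfg *ssh.ClientConfig, attempts int) error {
    	for i := 0; i < attempts; i++ {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			sess, serr := client.NewSession()
    			if serr == nil {
    				rerr := sess.Run("exit 0") // success means sshd accepts commands
    				sess.Close()
    				client.Close()
    				if rerr == nil {
    					return nil
    				}
    			} else {
    				client.Close()
    			}
    		}
    		time.Sleep(2 * time.Second) // back off, e.g. after "handshake failed"
    	}
    	return fmt.Errorf("ssh to %s never became ready", addr)
    }

    func main() {
    	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/m02/id_rsa")) // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    		Timeout:         5 * time.Second,
    	}
    	if err := waitForSSH("192.169.0.17:22", cfg, 30); err != nil {
    		panic(err)
    	}
    	fmt.Println("SSH available")
    }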
	I0425 12:24:43.017857    5272 main.go:141] libmachine: Detecting the provisioner...
	I0425 12:24:43.017863    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:24:43.018000    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:24:43.018095    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.018196    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.018294    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:24:43.018432    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:24:43.018564    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:24:43.018572    5272 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 12:24:43.071392    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 12:24:43.071431    5272 main.go:141] libmachine: found compatible host: buildroot
	I0425 12:24:43.071442    5272 main.go:141] libmachine: Provisioning with buildroot...
	I0425 12:24:43.071448    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetMachineName
	I0425 12:24:43.071575    5272 buildroot.go:166] provisioning hostname "multinode-034000-m02"
	I0425 12:24:43.071587    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetMachineName
	I0425 12:24:43.071682    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:24:43.071773    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:24:43.071853    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.071941    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.072031    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:24:43.072154    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:24:43.072300    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:24:43.072309    5272 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-034000-m02 && echo "multinode-034000-m02" | sudo tee /etc/hostname
	I0425 12:24:43.136242    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-034000-m02
	
	I0425 12:24:43.136262    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:24:43.136401    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:24:43.136504    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.136587    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.136675    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:24:43.136793    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:24:43.136955    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:24:43.136967    5272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-034000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-034000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-034000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 12:24:43.198286    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 12:24:43.198305    5272 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18757-1425/.minikube CaCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18757-1425/.minikube}
	I0425 12:24:43.198319    5272 buildroot.go:174] setting up certificates
	I0425 12:24:43.198325    5272 provision.go:84] configureAuth start
	I0425 12:24:43.198332    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetMachineName
	I0425 12:24:43.198461    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:24:43.198549    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:24:43.198624    5272 provision.go:143] copyHostCerts
	I0425 12:24:43.198656    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:24:43.198718    5272 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem, removing ...
	I0425 12:24:43.198724    5272 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:24:43.198866    5272 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem (1078 bytes)
	I0425 12:24:43.199059    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:24:43.199088    5272 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem, removing ...
	I0425 12:24:43.199093    5272 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:24:43.199163    5272 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem (1123 bytes)
	I0425 12:24:43.199307    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:24:43.199335    5272 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem, removing ...
	I0425 12:24:43.199339    5272 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:24:43.199403    5272 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem (1675 bytes)
	I0425 12:24:43.199575    5272 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem org=jenkins.multinode-034000-m02 san=[127.0.0.1 192.169.0.17 localhost minikube multinode-034000-m02]
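The configureAuth step generates a server certificate signed by the minikube CA, with the SAN list logged above so the Docker TLS endpoint is valid for every name and IP the host may use. Here is a minimal crypto/x509 sketch of signing a cert with those SANs; the in-memory CA is an illustration (minikube loads its CA from .minikube/certs instead).

    // server_cert.go — sketch: sign a server certificate with SANs via crypto/x509.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Illustrative in-memory CA; real code would load ca.pem / ca-key.pem.
    	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-034000-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs, matching the san=[...] list logged above:
    		DNSNames:    []string{"localhost", "minikube", "multinode-034000-m02"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.17")},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }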
	I0425 12:24:43.232293    5272 provision.go:177] copyRemoteCerts
	I0425 12:24:43.232335    5272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 12:24:43.232347    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:24:43.232464    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:24:43.232562    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.232687    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:24:43.232812    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:24:43.265818    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 12:24:43.265893    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0425 12:24:43.286730    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 12:24:43.286804    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0425 12:24:43.307005    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 12:24:43.307084    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 12:24:43.327531    5272 provision.go:87] duration metric: took 129.194217ms to configureAuth
	I0425 12:24:43.327547    5272 buildroot.go:189] setting minikube options for container-runtime
	I0425 12:24:43.327702    5272 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:24:43.327715    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:24:43.327855    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:24:43.327960    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:24:43.328043    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.328132    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.328224    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:24:43.328340    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:24:43.328466    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:24:43.328474    5272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0425 12:24:43.382990    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0425 12:24:43.383004    5272 buildroot.go:70] root file system type: tmpfs
	I0425 12:24:43.383083    5272 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0425 12:24:43.383095    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:24:43.383238    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:24:43.383343    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.383446    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.383540    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:24:43.383680    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:24:43.383814    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:24:43.383858    5272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0425 12:24:43.448562    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0425 12:24:43.448579    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:24:43.448723    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:24:43.448815    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.448905    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:43.448990    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:24:43.449115    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:24:43.449258    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:24:43.449270    5272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0425 12:24:44.960916    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
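The unit is written to docker.service.new and only moved into place (followed by daemon-reload, enable, restart) when `diff` reports a change; here the target didn't exist yet, so it was installed fresh. The unit text itself is a rendered template. A minimal text/template sketch of producing the Environment= and ExecStart= lines above follows; the field names are assumptions, not minikube's actual template.

    // docker_unit.go — sketch: render a docker.service unit with text/template.
    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket
    Requires=minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    {{range .Env}}Environment={{.}}
    {{end}}
    # Clear the inherited ExecStart before setting ours: systemd rejects two
    # ExecStart= lines outside Type=oneshot, as the comment in the unit explains.
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --insecure-registry {{.RegistryCIDR}}
    ExecReload=/bin/kill -s HUP $MAINPID

    [Install]
    WantedBy=multi-user.target
    `

    func main() {
    	t := template.Must(template.New("docker.service").Parse(unit))
    	err := t.Execute(os.Stdout, struct {
    		Env          []string
    		RegistryCIDR string
    	}{
    		Env:          []string{"NO_PROXY=192.169.0.16"}, // the control-plane IP above
    		RegistryCIDR: "10.96.0.0/12",
    	})
    	if err != nil {
    		panic(err)
    	}
    }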
	I0425 12:24:44.960933    5272 main.go:141] libmachine: Checking connection to Docker...
	I0425 12:24:44.960939    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetURL
	I0425 12:24:44.961079    5272 main.go:141] libmachine: Docker is up and running!
	I0425 12:24:44.961087    5272 main.go:141] libmachine: Reticulating splines...
	I0425 12:24:44.961092    5272 client.go:171] duration metric: took 15.662400768s to LocalClient.Create
	I0425 12:24:44.961104    5272 start.go:167] duration metric: took 15.662434048s to libmachine.API.Create "multinode-034000"
	I0425 12:24:44.961110    5272 start.go:293] postStartSetup for "multinode-034000-m02" (driver="hyperkit")
	I0425 12:24:44.961116    5272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 12:24:44.961125    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:24:44.961272    5272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 12:24:44.961284    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:24:44.961392    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:24:44.961498    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:44.961594    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:24:44.961706    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:24:45.001673    5272 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 12:24:45.005889    5272 command_runner.go:130] > NAME=Buildroot
	I0425 12:24:45.005911    5272 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0425 12:24:45.005915    5272 command_runner.go:130] > ID=buildroot
	I0425 12:24:45.005920    5272 command_runner.go:130] > VERSION_ID=2023.02.9
	I0425 12:24:45.005924    5272 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0425 12:24:45.006150    5272 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 12:24:45.006160    5272 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/addons for local assets ...
	I0425 12:24:45.006251    5272 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/files for local assets ...
	I0425 12:24:45.006397    5272 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> 18852.pem in /etc/ssl/certs
	I0425 12:24:45.006404    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /etc/ssl/certs/18852.pem
	I0425 12:24:45.006571    5272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 12:24:45.016391    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /etc/ssl/certs/18852.pem (1708 bytes)
	I0425 12:24:45.046286    5272 start.go:296] duration metric: took 85.16594ms for postStartSetup
	I0425 12:24:45.046311    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetConfigRaw
	I0425 12:24:45.059339    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:24:45.059559    5272 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:24:45.059892    5272 start.go:128] duration metric: took 15.793947381s to createHost
	I0425 12:24:45.059909    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:24:45.060005    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:24:45.060097    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:45.060177    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:45.060261    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:24:45.060372    5272 main.go:141] libmachine: Using SSH client type: native
	I0425 12:24:45.060504    5272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdeceb80] 0xded18e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:24:45.060511    5272 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 12:24:45.112795    5272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714073085.218743552
	
	I0425 12:24:45.112806    5272 fix.go:216] guest clock: 1714073085.218743552
	I0425 12:24:45.112811    5272 fix.go:229] Guest: 2024-04-25 12:24:45.218743552 -0700 PDT Remote: 2024-04-25 12:24:45.059902 -0700 PDT m=+79.495959790 (delta=158.841552ms)
	I0425 12:24:45.112822    5272 fix.go:200] guest clock delta is within tolerance: 158.841552ms
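The guest-clock check runs `date +%s.%N` in the VM and compares the result against the host clock; here the ~159ms delta is within tolerance, so no clock sync is forced. A small sketch of that comparison, using the exact values from the log:

    // clock_delta.go — sketch: parse the guest's `date +%s.%N` output and
    // compute the delta against the host clock. The 2s tolerance is an assumption.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func guestDelta(dateOutput string, now time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 { // %N yields nine digits of nanoseconds
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	return time.Unix(sec, nsec).Sub(now), nil
    }

    func main() {
    	// Guest and host timestamps from the log lines above.
    	delta, err := guestDelta("1714073085.218743552", time.Unix(1714073085, 59902000))
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -2*time.Second && delta < 2*time.Second)
    }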
	I0425 12:24:45.112831    5272 start.go:83] releasing machines lock for "multinode-034000-m02", held for 15.847019546s
	I0425 12:24:45.112850    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:24:45.112978    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:24:45.134909    5272 out.go:177] * Found network options:
	I0425 12:24:45.180929    5272 out.go:177]   - NO_PROXY=192.169.0.16
	W0425 12:24:45.201871    5272 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 12:24:45.201907    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:24:45.202395    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:24:45.202555    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:24:45.202637    5272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 12:24:45.202678    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	W0425 12:24:45.202720    5272 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 12:24:45.202811    5272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0425 12:24:45.202814    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:24:45.202828    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:24:45.202936    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:24:45.202984    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:45.203085    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:24:45.203097    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:24:45.203203    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:24:45.203261    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:24:45.203336    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:24:45.235236    5272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0425 12:24:45.235351    5272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 12:24:45.235408    5272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 12:24:45.368629    5272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0425 12:24:45.368770    5272 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0425 12:24:45.368783    5272 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 12:24:45.368789    5272 start.go:494] detecting cgroup driver to use...
	I0425 12:24:45.368865    5272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:24:45.384552    5272 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0425 12:24:45.384845    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0425 12:24:45.393370    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0425 12:24:45.401772    5272 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0425 12:24:45.401826    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0425 12:24:45.410021    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:24:45.418331    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0425 12:24:45.426638    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:24:45.434885    5272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 12:24:45.443609    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0425 12:24:45.452170    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0425 12:24:45.461887    5272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0425 12:24:45.470521    5272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 12:24:45.477941    5272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0425 12:24:45.478053    5272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 12:24:45.485588    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:24:45.588780    5272 ssh_runner.go:195] Run: sudo systemctl restart containerd
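Before settling on Docker, minikube normalizes containerd's config with the series of sed edits above, most importantly forcing SystemdCgroup = false so the cgroupfs driver is used. The same rewrite expressed with Go's regexp package on an inline sample config (a sketch, not the actual implementation):

    // cgroupfs_rewrite.go — sketch: the SystemdCgroup sed edit as a Go regexp.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
                SystemdCgroup = true
    `
    	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
    }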
	I0425 12:24:45.609666    5272 start.go:494] detecting cgroup driver to use...
	I0425 12:24:45.628302    5272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0425 12:24:45.645024    5272 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0425 12:24:45.645262    5272 command_runner.go:130] > [Unit]
	I0425 12:24:45.645277    5272 command_runner.go:130] > Description=Docker Application Container Engine
	I0425 12:24:45.645298    5272 command_runner.go:130] > Documentation=https://docs.docker.com
	I0425 12:24:45.645318    5272 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0425 12:24:45.645322    5272 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0425 12:24:45.645326    5272 command_runner.go:130] > StartLimitBurst=3
	I0425 12:24:45.645330    5272 command_runner.go:130] > StartLimitIntervalSec=60
	I0425 12:24:45.645334    5272 command_runner.go:130] > [Service]
	I0425 12:24:45.645336    5272 command_runner.go:130] > Type=notify
	I0425 12:24:45.645342    5272 command_runner.go:130] > Restart=on-failure
	I0425 12:24:45.645347    5272 command_runner.go:130] > Environment=NO_PROXY=192.169.0.16
	I0425 12:24:45.645353    5272 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0425 12:24:45.645361    5272 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0425 12:24:45.645368    5272 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0425 12:24:45.645373    5272 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0425 12:24:45.645379    5272 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0425 12:24:45.645384    5272 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0425 12:24:45.645390    5272 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0425 12:24:45.645399    5272 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0425 12:24:45.645405    5272 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0425 12:24:45.645409    5272 command_runner.go:130] > ExecStart=
	I0425 12:24:45.645420    5272 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0425 12:24:45.645427    5272 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0425 12:24:45.645432    5272 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0425 12:24:45.645438    5272 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0425 12:24:45.645442    5272 command_runner.go:130] > LimitNOFILE=infinity
	I0425 12:24:45.645445    5272 command_runner.go:130] > LimitNPROC=infinity
	I0425 12:24:45.645449    5272 command_runner.go:130] > LimitCORE=infinity
	I0425 12:24:45.645454    5272 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0425 12:24:45.645459    5272 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0425 12:24:45.645462    5272 command_runner.go:130] > TasksMax=infinity
	I0425 12:24:45.645466    5272 command_runner.go:130] > TimeoutStartSec=0
	I0425 12:24:45.645473    5272 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0425 12:24:45.645481    5272 command_runner.go:130] > Delegate=yes
	I0425 12:24:45.645486    5272 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0425 12:24:45.645496    5272 command_runner.go:130] > KillMode=process
	I0425 12:24:45.645500    5272 command_runner.go:130] > [Install]
	I0425 12:24:45.645508    5272 command_runner.go:130] > WantedBy=multi-user.target
	I0425 12:24:45.645685    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:24:45.657382    5272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 12:24:45.679915    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:24:45.691688    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:24:45.702135    5272 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0425 12:24:45.724683    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:24:45.735317    5272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:24:45.750705    5272 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0425 12:24:45.751102    5272 ssh_runner.go:195] Run: which cri-dockerd
	I0425 12:24:45.754203    5272 command_runner.go:130] > /usr/bin/cri-dockerd
	I0425 12:24:45.754557    5272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0425 12:24:45.762150    5272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0425 12:24:45.776591    5272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0425 12:24:45.878910    5272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0425 12:24:45.994210    5272 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0425 12:24:45.994239    5272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0425 12:24:46.008926    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:24:46.106848    5272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0425 12:24:48.374526    5272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.267587683s)
	I0425 12:24:48.374592    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0425 12:24:48.384687    5272 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0425 12:24:48.398004    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0425 12:24:48.408493    5272 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0425 12:24:48.510082    5272 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0425 12:24:48.609431    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:24:48.715333    5272 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0425 12:24:48.729537    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0425 12:24:48.739950    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:24:48.850961    5272 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0425 12:24:48.909671    5272 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0425 12:24:48.910546    5272 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0425 12:24:48.914753    5272 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0425 12:24:48.914766    5272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0425 12:24:48.914772    5272 command_runner.go:130] > Device: 0,22	Inode: 799         Links: 1
	I0425 12:24:48.914778    5272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0425 12:24:48.914782    5272 command_runner.go:130] > Access: 2024-04-25 19:24:48.969630093 +0000
	I0425 12:24:48.914790    5272 command_runner.go:130] > Modify: 2024-04-25 19:24:48.969630093 +0000
	I0425 12:24:48.914795    5272 command_runner.go:130] > Change: 2024-04-25 19:24:48.971629655 +0000
	I0425 12:24:48.914798    5272 command_runner.go:130] >  Birth: -
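The 60-second wait for /var/run/cri-dockerd.sock succeeds as soon as stat sees the socket. A minimal sketch of such a probe, which also confirms the socket accepts connections; the 500ms poll interval is an assumption:

    // wait_for_socket.go — sketch: poll until a unix socket exists and connects.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			if conn, err := net.DialTimeout("unix", path, time.Second); err == nil {
    				conn.Close()
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("socket %s not ready after %v", path, timeout)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("cri-dockerd socket is up")
    }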
	I0425 12:24:48.915044    5272 start.go:562] Will wait 60s for crictl version
	I0425 12:24:48.915091    5272 ssh_runner.go:195] Run: which crictl
	I0425 12:24:48.917776    5272 command_runner.go:130] > /usr/bin/crictl
	I0425 12:24:48.918023    5272 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 12:24:48.943415    5272 command_runner.go:130] > Version:  0.1.0
	I0425 12:24:48.943428    5272 command_runner.go:130] > RuntimeName:  docker
	I0425 12:24:48.943432    5272 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0425 12:24:48.943438    5272 command_runner.go:130] > RuntimeApiVersion:  v1
	I0425 12:24:48.944518    5272 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0425 12:24:48.944603    5272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0425 12:24:48.959655    5272 command_runner.go:130] > 26.0.2
	I0425 12:24:48.960482    5272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0425 12:24:48.975703    5272 command_runner.go:130] > 26.0.2
	I0425 12:24:49.024674    5272 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0425 12:24:49.045785    5272 out.go:177]   - env NO_PROXY=192.169.0.16
	I0425 12:24:49.068506    5272 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:24:49.069097    5272 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0425 12:24:49.073476    5272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
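host.minikube.internal is kept current with an idempotent rewrite of /etc/hosts: strip any stale line for the name, then append the fresh entry. The same logic as a Go sketch (the .tmp path stands in for the log's /tmp/h.$$):

    // hosts_entry.go — sketch: idempotently set a name/IP entry in /etc/hosts.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func setHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) { // mimic grep -v $'\t<name>$'
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := path + ".tmp" // hypothetical temp path
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path) // atomic swap, like the sudo cp in the log
    }

    func main() {
    	if err := setHostsEntry("/etc/hosts", "192.169.0.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }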
	I0425 12:24:49.083711    5272 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:24:49.083873    5272 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:24:49.084112    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:24:49.084128    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:24:49.094223    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52397
	I0425 12:24:49.094525    5272 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:24:49.094848    5272 main.go:141] libmachine: Using API Version  1
	I0425 12:24:49.094866    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:24:49.095093    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:24:49.095196    5272 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:24:49.095282    5272 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:24:49.095351    5272 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:24:49.096296    5272 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:24:49.096553    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:24:49.096568    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:24:49.105186    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52399
	I0425 12:24:49.105591    5272 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:24:49.105908    5272 main.go:141] libmachine: Using API Version  1
	I0425 12:24:49.105919    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:24:49.106139    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:24:49.106256    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:24:49.106352    5272 certs.go:68] Setting up /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000 for IP: 192.169.0.17
	I0425 12:24:49.106358    5272 certs.go:194] generating shared ca certs ...
	I0425 12:24:49.106369    5272 certs.go:226] acquiring lock for ca certs: {Name:mk1f3cabc8bfb1fa57eb09572b98c6852173235a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:24:49.106515    5272 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key
	I0425 12:24:49.106583    5272 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key
	I0425 12:24:49.106592    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 12:24:49.106616    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 12:24:49.106635    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 12:24:49.106652    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 12:24:49.106735    5272 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem (1338 bytes)
	W0425 12:24:49.106775    5272 certs.go:480] ignoring /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885_empty.pem, impossibly tiny 0 bytes
	I0425 12:24:49.106785    5272 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 12:24:49.106820    5272 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem (1078 bytes)
	I0425 12:24:49.106853    5272 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem (1123 bytes)
	I0425 12:24:49.106881    5272 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem (1675 bytes)
	I0425 12:24:49.107294    5272 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem (1708 bytes)
	I0425 12:24:49.107345    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem -> /usr/share/ca-certificates/1885.pem
	I0425 12:24:49.107371    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /usr/share/ca-certificates/18852.pem
	I0425 12:24:49.107392    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:24:49.107417    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 12:24:49.127039    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 12:24:49.146312    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 12:24:49.165936    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 12:24:49.184958    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem --> /usr/share/ca-certificates/1885.pem (1338 bytes)
	I0425 12:24:49.204598    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /usr/share/ca-certificates/18852.pem (1708 bytes)
	I0425 12:24:49.223706    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 12:24:49.244160    5272 ssh_runner.go:195] Run: openssl version
	I0425 12:24:49.248117    5272 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0425 12:24:49.248331    5272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18852.pem && ln -fs /usr/share/ca-certificates/18852.pem /etc/ssl/certs/18852.pem"
	I0425 12:24:49.257481    5272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18852.pem
	I0425 12:24:49.260760    5272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 25 18:39 /usr/share/ca-certificates/18852.pem
	I0425 12:24:49.260869    5272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:39 /usr/share/ca-certificates/18852.pem
	I0425 12:24:49.260911    5272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18852.pem
	I0425 12:24:49.264846    5272 command_runner.go:130] > 3ec20f2e
	I0425 12:24:49.265066    5272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18852.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 12:24:49.273979    5272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 12:24:49.282947    5272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:24:49.286064    5272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 25 18:31 /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:24:49.286301    5272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:31 /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:24:49.286335    5272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:24:49.290295    5272 command_runner.go:130] > b5213941
	I0425 12:24:49.290468    5272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 12:24:49.299605    5272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1885.pem && ln -fs /usr/share/ca-certificates/1885.pem /etc/ssl/certs/1885.pem"
	I0425 12:24:49.308779    5272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1885.pem
	I0425 12:24:49.312166    5272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 25 18:39 /usr/share/ca-certificates/1885.pem
	I0425 12:24:49.312327    5272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:39 /usr/share/ca-certificates/1885.pem
	I0425 12:24:49.312369    5272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1885.pem
	I0425 12:24:49.316332    5272 command_runner.go:130] > 51391683
	I0425 12:24:49.316505    5272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1885.pem /etc/ssl/certs/51391683.0"
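Each CA certificate above is installed by asking openssl for its subject hash and symlinking <hash>.0 to the PEM file, which is how OpenSSL's trust-directory lookup finds it. A sketch of that idiom via os/exec; the paths are illustrative:

    // cert_hash_link.go — sketch: install a CA cert as /etc/ssl/certs/<hash>.0.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" above
    	link := filepath.Join(certsDir, hash+".0")
    	os.Remove(link) // mimic ln -fs: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }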
	I0425 12:24:49.325600    5272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 12:24:49.328632    5272 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 12:24:49.328743    5272 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 12:24:49.328774    5272 kubeadm.go:928] updating node {m02 192.169.0.17 8443 v1.30.0 docker false true} ...
	I0425 12:24:49.328840    5272 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-034000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 12:24:49.328890    5272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 12:24:49.336925    5272 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	I0425 12:24:49.336946    5272 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0425 12:24:49.336991    5272 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0425 12:24:49.345159    5272 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0425 12:24:49.345159    5272 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0425 12:24:49.345173    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0425 12:24:49.345176    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0425 12:24:49.345159    5272 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0425 12:24:49.345230    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:24:49.345268    5272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0425 12:24:49.345268    5272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0425 12:24:49.357101    5272 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0425 12:24:49.357134    5272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0425 12:24:49.357128    5272 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0425 12:24:49.357149    5272 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0425 12:24:49.357159    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0425 12:24:49.357176    5272 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0425 12:24:49.357193    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0425 12:24:49.357254    5272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0425 12:24:49.381989    5272 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0425 12:24:49.382012    5272 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0425 12:24:49.382044    5272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
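Since the binaries aren't cached on the node, each is fetched from dl.k8s.io together with its .sha256 file, verified, and then scp'd into /var/lib/minikube/binaries. A minimal sketch of the download-and-verify pattern (the local destination file is an assumption):

    // checksum_verify.go — sketch: fetch a release binary plus its .sha256 and verify.
    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetch downloads a URL fully into memory; fine for CLI-sized binaries.
    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
    	bin, err := fetch(base)
    	if err != nil {
    		panic(err)
    	}
    	sum, err := fetch(base + ".sha256")
    	if err != nil {
    		panic(err)
    	}
    	want := strings.Fields(string(sum))[0] // checksum file may carry a filename column
    	h := sha256.Sum256(bin)
    	if got := hex.EncodeToString(h[:]); got != want {
    		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
    	}
    	if err := os.WriteFile("kubectl", bin, 0o755); err != nil { // hypothetical destination
    		panic(err)
    	}
    	fmt.Println("kubectl verified")
    }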
	I0425 12:24:50.055902    5272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0425 12:24:50.063576    5272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0425 12:24:50.077542    5272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 12:24:50.091741    5272 ssh_runner.go:195] Run: grep 192.169.0.16	control-plane.minikube.internal$ /etc/hosts
	I0425 12:24:50.095202    5272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 12:24:50.105445    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:24:50.212521    5272 ssh_runner.go:195] Run: sudo systemctl start kubelet
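
The grep/echo one-liner at 12:24:50.095202 is an idempotent pin of control-plane.minikube.internal: strip any existing mapping, append the current control-plane IP, and install the result through a temp file. A Go sketch of the same rewrite, run against a stand-in hosts.sample file rather than a real /etc/hosts — the file name and the absence of sudo handling are assumptions for illustration:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost rewrites hostsFile so exactly one line maps name to ip,
    // mirroring the grep-and-append one-liner in the log.
    func pinHost(hostsFile, ip, name string) error {
        data, err := os.ReadFile(hostsFile)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        // Write to a temp file first, then swap it in: the same
        // write-then-replace shape as the /tmp/h.$$ + cp in the log.
        tmp := hostsFile + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, hostsFile)
    }

    func main() {
        if err := pinHost("hosts.sample", "192.169.0.16", "control-plane.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
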
	I0425 12:24:50.228374    5272 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:24:50.228657    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:24:50.228675    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:24:50.237816    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52401
	I0425 12:24:50.238151    5272 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:24:50.238494    5272 main.go:141] libmachine: Using API Version  1
	I0425 12:24:50.238503    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:24:50.238731    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:24:50.238894    5272 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:24:50.239014    5272 start.go:316] joinCluster: &{Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
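
For readability, here is the shape of the joinCluster argument flattened above, trimmed to the fields the dump actually shows — an illustrative subset, not minikube's full ClusterConfig type:

    package config

    // Illustrative subset of the cluster config printed above; field names
    // are taken from the dump, and everything else in the real type is omitted.
    type Node struct {
        Name              string
        IP                string
        Port              int
        KubernetesVersion string
        ControlPlane      bool
        Worker            bool
    }

    type ClusterConfig struct {
        Name     string // "multinode-034000"
        Driver   string // "hyperkit"
        Memory   int    // 2200 (MB)
        CPUs     int
        DiskSize int // 20000 (MB)
        Nodes    []Node
        Addons   map[string]bool
    }
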
	I0425 12:24:50.239091    5272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0425 12:24:50.239103    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:24:50.239197    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:24:50.239273    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:24:50.239359    5272 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:24:50.239437    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:24:50.323921    5272 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 5aswxk.w4pzsvut1q270y9r --discovery-token-ca-cert-hash sha256:00651354ee141ab473da454fccfa896339ebbff71705c055a7dbbfb8ae906871 
	I0425 12:24:50.323964    5272 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0425 12:24:50.324000    5272 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5aswxk.w4pzsvut1q270y9r --discovery-token-ca-cert-hash sha256:00651354ee141ab473da454fccfa896339ebbff71705c055a7dbbfb8ae906871 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-034000-m02"
	I0425 12:24:50.439593    5272 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 12:24:51.106822    5272 command_runner.go:130] > [preflight] Running pre-flight checks
	I0425 12:24:51.106865    5272 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0425 12:24:51.106886    5272 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0425 12:24:51.106915    5272 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 12:24:51.106925    5272 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 12:24:51.106932    5272 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0425 12:24:51.106942    5272 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 12:24:51.106953    5272 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 503.181704ms
	I0425 12:24:51.106958    5272 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0425 12:24:51.106965    5272 command_runner.go:130] > This node has joined the cluster:
	I0425 12:24:51.106971    5272 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0425 12:24:51.106979    5272 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0425 12:24:51.106986    5272 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0425 12:24:51.107009    5272 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0425 12:24:51.228629    5272 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0425 12:24:51.336670    5272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-034000-m02 minikube.k8s.io/updated_at=2024_04_25T12_24_51_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=multinode-034000 minikube.k8s.io/primary=false
	I0425 12:24:51.401990    5272 command_runner.go:130] > node/multinode-034000-m02 labeled
	I0425 12:24:51.402985    5272 start.go:318] duration metric: took 1.163934998s to joinCluster
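
Taken together, 12:24:50.239091 through 12:24:51.402985 is a two-step join: mint a token on the control plane with kubeadm token create --print-join-command, replay the printed command on the worker with the CRI socket and node name pinned, then enable kubelet and label the new node. A sketch of the same sequence driven from Go, assuming kubeadm and kubectl on PATH and eliding the SSH plumbing the log routes everything through — the run helper is hypothetical, not minikube's code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // run executes a command and returns its trimmed stdout; in the log the
    // equivalent calls go through an SSH runner to each VM.
    func run(name string, args ...string) (string, error) {
        out, err := exec.Command(name, args...).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        // Step 1 (control plane): mint a join token and print the command.
        join, err := run("kubeadm", "token", "create", "--print-join-command", "--ttl=0")
        if err != nil {
            fmt.Println("token create failed:", err)
            return
        }
        // Step 2 (worker): replay the printed command, pinning the CRI
        // socket and node name the way the log does.
        joinArgs := append(strings.Fields(join)[1:],
            "--ignore-preflight-errors=all",
            "--cri-socket", "unix:///var/run/cri-dockerd.sock",
            "--node-name=multinode-034000-m02")
        if _, err := run("kubeadm", joinArgs...); err != nil {
            fmt.Println("join failed:", err)
            return
        }
        // Step 3: make kubelet survive reboots, then label the node
        // (the log applies a larger minikube.k8s.io/* label set).
        _, _ = run("systemctl", "enable", "--now", "kubelet")
        _, _ = run("kubectl", "label", "--overwrite", "nodes",
            "multinode-034000-m02", "minikube.k8s.io/primary=false")
    }
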
	I0425 12:24:51.403051    5272 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0425 12:24:51.426814    5272 out.go:177] * Verifying Kubernetes components...
	I0425 12:24:51.403186    5272 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:24:51.468816    5272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:24:51.571832    5272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 12:24:51.584548    5272 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:24:51.584753    5272 kapi.go:59] client config for multinode-034000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key", CAFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf373ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0425 12:24:51.584965    5272 node_ready.go:35] waiting up to 6m0s for node "multinode-034000-m02" to be "Ready" ...
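
Everything from here to the end of the section is a single readiness poll: GET the node object roughly every 500ms and re-check its Ready condition until it flips or the 6m0s budget expires; the interleaved node_ready.go:53 lines record each "False" observation along the way. A minimal client-go sketch of the same wait, assuming a kubeconfig at the default location — a sketch of the pattern, not minikube's node_ready implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True,
    // the same check the poll below repeats against the API server.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Poll every 500ms for up to 6 minutes, like the GETs in the log.
        err = wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                n, err := cs.CoreV1().Nodes().Get(ctx, "multinode-034000-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient errors as "not yet"
                }
                return nodeReady(n), nil
            })
        fmt.Println("wait result:", err)
    }
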
	I0425 12:24:51.585011    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:51.585016    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:51.585023    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:51.585027    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:51.586668    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:51.586678    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:51.586683    5272 round_trippers.go:580]     Audit-Id: ca93407b-0f61-4de3-b48d-ab64bcbe9727
	I0425 12:24:51.586685    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:51.586704    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:51.586710    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:51.586714    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:51.586720    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:51.586722    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:51 GMT
	I0425 12:24:51.586782    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:52.085447    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:52.085471    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:52.085482    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:52.085489    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:52.087610    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:52.087628    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:52.087639    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:52.087651    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:52 GMT
	I0425 12:24:52.087659    5272 round_trippers.go:580]     Audit-Id: 04b821d7-877d-4c31-b20d-5d89a8123642
	I0425 12:24:52.087667    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:52.087673    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:52.087677    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:52.087682    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:52.087782    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:52.585380    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:52.585397    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:52.585406    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:52.585412    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:52.587480    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:52.587490    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:52.587509    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:52.587514    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:52.587531    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:52.587537    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:52 GMT
	I0425 12:24:52.587541    5272 round_trippers.go:580]     Audit-Id: 1ffbd017-d6fe-4490-b6f7-b510d3223c82
	I0425 12:24:52.587543    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:52.587545    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:52.587577    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:53.086319    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:53.086365    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:53.086375    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:53.086393    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:53.088437    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:53.088445    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:53.088450    5272 round_trippers.go:580]     Audit-Id: 2960e833-9c7b-4a5a-88f0-42ccbef10a71
	I0425 12:24:53.088453    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:53.088457    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:53.088461    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:53.088463    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:53.088465    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:53.088475    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:53 GMT
	I0425 12:24:53.088529    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:53.585600    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:53.585617    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:53.585625    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:53.585630    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:53.587531    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:53.587546    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:53.587552    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:53.587554    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:53 GMT
	I0425 12:24:53.587557    5272 round_trippers.go:580]     Audit-Id: 295b605e-9c37-477d-8b2f-a8cdfe3b8d41
	I0425 12:24:53.587561    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:53.587564    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:53.587568    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:53.587570    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:53.587645    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:53.587803    5272 node_ready.go:53] node "multinode-034000-m02" has status "Ready":"False"
	I0425 12:24:54.087087    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:54.087119    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:54.087126    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:54.087129    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:54.088782    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:54.088798    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:54.088804    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:54.088808    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:54.088812    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:54.088822    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:54.088824    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:54 GMT
	I0425 12:24:54.088827    5272 round_trippers.go:580]     Audit-Id: bc31ec7b-b44e-4c06-83c3-a1380f6ecde8
	I0425 12:24:54.088831    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:54.088885    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:54.585763    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:54.585805    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:54.585826    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:54.585874    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:54.587426    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:54.587434    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:54.587440    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:54 GMT
	I0425 12:24:54.587442    5272 round_trippers.go:580]     Audit-Id: ac27a167-0a2a-42dc-8f37-6e8a82321e3b
	I0425 12:24:54.587445    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:54.587448    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:54.587450    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:54.587452    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:54.587455    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:54.587508    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:55.086653    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:55.086681    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:55.086688    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:55.086693    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:55.088282    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:55.088293    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:55.088300    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:55.088304    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:55.088308    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:55 GMT
	I0425 12:24:55.088311    5272 round_trippers.go:580]     Audit-Id: cf197e87-c920-4f92-a1f8-e7e0472477f2
	I0425 12:24:55.088318    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:55.088321    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:55.088324    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:55.088370    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:55.585322    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:55.585334    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:55.585340    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:55.585343    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:55.586908    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:55.586919    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:55.586924    5272 round_trippers.go:580]     Audit-Id: 30daf200-e512-49ff-89e0-313f33786b74
	I0425 12:24:55.586927    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:55.586930    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:55.586933    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:55.586936    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:55.586938    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:55.586941    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:55 GMT
	I0425 12:24:55.586990    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:56.086210    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:56.086242    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:56.086250    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:56.086254    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:56.087904    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:56.087933    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:56.087942    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:56.087947    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:56.087951    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:56.087956    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:56.087959    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:56.087966    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:56 GMT
	I0425 12:24:56.087970    5272 round_trippers.go:580]     Audit-Id: 69ab8006-be7f-4102-bdc9-3ccb49049f1b
	I0425 12:24:56.088006    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:56.088164    5272 node_ready.go:53] node "multinode-034000-m02" has status "Ready":"False"
	I0425 12:24:56.586644    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:56.586656    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:56.586663    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:56.586666    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:56.588380    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:56.588389    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:56.588395    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:56.588412    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:56.588418    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:56 GMT
	I0425 12:24:56.588422    5272 round_trippers.go:580]     Audit-Id: f05350a8-df4c-4c19-91bc-db157e965733
	I0425 12:24:56.588424    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:56.588426    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:56.588429    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:56.588490    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:57.085974    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:57.085989    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:57.085996    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:57.086000    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:57.088043    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:57.088058    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:57.088064    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:57.088070    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:57.088073    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:57.088083    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:57.088086    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:57 GMT
	I0425 12:24:57.088088    5272 round_trippers.go:580]     Audit-Id: 99081a58-fe7c-4434-9dab-2fd8f55fc512
	I0425 12:24:57.088091    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:57.088152    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:57.585885    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:57.585906    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:57.585918    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:57.585925    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:57.588384    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:57.588397    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:57.588404    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:57.588409    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:57.588412    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:57.588416    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:57.588420    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:57 GMT
	I0425 12:24:57.588424    5272 round_trippers.go:580]     Audit-Id: 10bd5f96-58b0-4902-8444-7137f2221d21
	I0425 12:24:57.588427    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:57.588496    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:58.085908    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:58.085924    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:58.085932    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:58.085938    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:58.088140    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:24:58.088151    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:58.088156    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:58.088160    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:58.088162    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:58 GMT
	I0425 12:24:58.088165    5272 round_trippers.go:580]     Audit-Id: 5eaeb4bd-5019-4e49-aa68-f2e5ef39aba1
	I0425 12:24:58.088169    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:58.088171    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:58.088174    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:58.088227    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:58.088371    5272 node_ready.go:53] node "multinode-034000-m02" has status "Ready":"False"
	I0425 12:24:58.585823    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:58.585840    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:58.585846    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:58.585850    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:58.587429    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:58.587439    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:58.587444    5272 round_trippers.go:580]     Audit-Id: cf50ea5f-df53-4444-9bdc-18fb40e1981c
	I0425 12:24:58.587446    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:58.587449    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:58.587454    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:58.587456    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:58.587459    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:58.587471    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:58 GMT
	I0425 12:24:58.587641    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:59.085399    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:59.085413    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:59.085420    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:59.085423    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:59.087144    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:59.087155    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:59.087161    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:59.087164    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:59.087168    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:59.087170    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:59.087183    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:59 GMT
	I0425 12:24:59.087187    5272 round_trippers.go:580]     Audit-Id: 72ac1412-37e5-4a2e-b81a-967b255f9b00
	I0425 12:24:59.087189    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:59.087239    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:24:59.586377    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:24:59.586426    5272 round_trippers.go:469] Request Headers:
	I0425 12:24:59.586435    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:24:59.586438    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:24:59.588075    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:24:59.588086    5272 round_trippers.go:577] Response Headers:
	I0425 12:24:59.588091    5272 round_trippers.go:580]     Audit-Id: 2957fa3f-3ab9-4bf6-90bf-69b58e2bbd00
	I0425 12:24:59.588109    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:24:59.588114    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:24:59.588117    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:24:59.588120    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:24:59.588123    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:24:59.588127    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:24:59 GMT
	I0425 12:24:59.588154    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:25:00.086720    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:25:00.086736    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:00.086744    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:00.086749    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:00.093668    5272 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 12:25:00.093679    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:00.093685    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:00.093689    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:25:00.093725    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:00 GMT
	I0425 12:25:00.093738    5272 round_trippers.go:580]     Audit-Id: 63e138bf-2a36-46ab-b4de-e55d3eec43dd
	I0425 12:25:00.093746    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:00.093751    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:00.093755    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:00.093864    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:25:00.094067    5272 node_ready.go:53] node "multinode-034000-m02" has status "Ready":"False"
	I0425 12:25:00.587309    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:25:00.587326    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:00.587331    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:00.587334    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:00.588950    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:00.588963    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:00.588970    5272 round_trippers.go:580]     Audit-Id: 5c870601-799b-4440-8646-8a1d8927424e
	I0425 12:25:00.588977    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:00.588981    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:00.588986    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:00.588992    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:00.588995    5272 round_trippers.go:580]     Content-Length: 4087
	I0425 12:25:00.588998    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:00 GMT
	I0425 12:25:00.589082    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"463","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3063 chars]
	I0425 12:25:01.087212    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:25:01.087230    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:01.087261    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:01.087266    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:01.089214    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:01.089227    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:01.089242    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:01.089247    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:01.089252    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:01.089255    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:01.089259    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:01 GMT
	I0425 12:25:01.089264    5272 round_trippers.go:580]     Audit-Id: 51dc6526-82d7-4787-a12e-d7e5421d2d52
	I0425 12:25:01.089411    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"489","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3396 chars]
	I0425 12:25:01.586211    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:25:01.586224    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:01.586231    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:01.586234    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:01.587599    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:01.587609    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:01.587614    5272 round_trippers.go:580]     Audit-Id: 9392750a-f887-49f4-a9ea-2f68191c244f
	I0425 12:25:01.587617    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:01.587621    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:01.587623    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:01.587627    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:01.587629    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:01 GMT
	I0425 12:25:01.587715    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"489","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3396 chars]
	I0425 12:25:02.087196    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:25:02.087241    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:02.087250    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:02.087254    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:02.089478    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:25:02.089496    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:02.089516    5272 round_trippers.go:580]     Audit-Id: 6216396a-7e22-4aba-8612-4e47c9939223
	I0425 12:25:02.089520    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:02.089522    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:02.089525    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:02.089528    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:02.089531    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:02 GMT
	I0425 12:25:02.089883    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"489","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3396 chars]
	I0425 12:25:02.585996    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:25:02.586010    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:02.586017    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:02.586019    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:02.587329    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:02.587339    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:02.587345    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:02.587348    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:02.587351    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:02 GMT
	I0425 12:25:02.587353    5272 round_trippers.go:580]     Audit-Id: 5a4b4814-6c6d-42a9-a578-539f1ddf338a
	I0425 12:25:02.587356    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:02.587359    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:02.587562    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"489","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3396 chars]
	I0425 12:25:02.587721    5272 node_ready.go:53] node "multinode-034000-m02" has status "Ready":"False"
	I0425 12:25:03.085539    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:25:03.085553    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:03.085562    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:03.085566    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:03.087199    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:03.087211    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:03.087220    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:03.087225    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:03.087231    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:03.087236    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:03 GMT
	I0425 12:25:03.087240    5272 round_trippers.go:580]     Audit-Id: 5b6800fc-4c4b-4648-92be-ce262eab9eaa
	I0425 12:25:03.087243    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:03.087400    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"489","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3396 chars]
	I0425 12:25:03.585656    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:25:03.585670    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:03.585676    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:03.585680    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:03.587740    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:25:03.587751    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:03.587756    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:03.587763    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:03.587767    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:03.587769    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:03 GMT
	I0425 12:25:03.587772    5272 round_trippers.go:580]     Audit-Id: e3423525-a452-47ef-9fe9-a2c5ed46ec4a
	I0425 12:25:03.587774    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:03.588021    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"489","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3396 chars]
	I0425 12:25:04.086208    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:25:04.086225    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.086233    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.086237    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.088047    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:04.088074    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.088087    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.088094    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.088099    5272 round_trippers.go:580]     Audit-Id: c3fdd66d-0bda-4f4b-8062-19dc9b48d713
	I0425 12:25:04.088105    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.088109    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.088113    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.088293    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"495","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3262 chars]
	I0425 12:25:04.088457    5272 node_ready.go:49] node "multinode-034000-m02" has status "Ready":"True"
	I0425 12:25:04.088465    5272 node_ready.go:38] duration metric: took 12.503116293s for node "multinode-034000-m02" to be "Ready" ...
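The sequence above is the node_ready wait: GET the node roughly twice a second and inspect status.conditions until NodeReady turns True (here after about 12.5s, once the object was updated and its resourceVersion moved from 489 to 495). A minimal client-go sketch of the same check; the kubeconfig path and the hard-coded node name are illustrative assumptions, not minikube's actual implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Sketch only: build a clientset from the default kubeconfig
        // (~/.kube/config); minikube wires up its own REST config instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Re-fetch the node twice a second until NodeReady is True,
        // the same condition the node_ready lines above are checking.
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-034000-m02", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }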
	I0425 12:25:04.088471    5272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 12:25:04.088499    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:25:04.088504    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.088509    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.088513    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.090387    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:04.090410    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.090422    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.090428    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.090431    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.090436    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.090440    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.090445    5272 round_trippers.go:580]     Audit-Id: 5d7e7cae-2b9a-4c8d-b0ce-63adff81397f
	I0425 12:25:04.091177    5272 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"495"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"411","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70370 chars]
	I0425 12:25:04.092795    5272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:04.092841    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:25:04.092846    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.092852    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.092855    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.093934    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:04.093943    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.093950    5272 round_trippers.go:580]     Audit-Id: 1b62cab5-aef1-475c-9d8a-bfa938fae8ef
	I0425 12:25:04.093956    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.093966    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.093973    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.093976    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.093979    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.094191    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"411","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0425 12:25:04.094435    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:25:04.094443    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.094447    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.094450    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.095358    5272 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:25:04.095365    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.095371    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.095391    5272 round_trippers.go:580]     Audit-Id: 9b024d53-723a-4368-8170-7cdaac9fa855
	I0425 12:25:04.095397    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.095400    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.095402    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.095405    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.095589    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"421","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0425 12:25:04.095755    5272 pod_ready.go:92] pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace has status "Ready":"True"
	I0425 12:25:04.095762    5272 pod_ready.go:81] duration metric: took 2.957677ms for pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:04.095767    5272 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:04.095794    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:25:04.095798    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.095803    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.095808    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.096758    5272 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:25:04.096766    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.096771    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.096774    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.096778    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.096789    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.096794    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.096796    5272 round_trippers.go:580]     Audit-Id: 5a8d77e9-e6f5-400c-9f04-55aafc9d90c7
	I0425 12:25:04.096959    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"295","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0425 12:25:04.097173    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:25:04.097180    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.097184    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.097187    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.098103    5272 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:25:04.098114    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.098121    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.098126    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.098131    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.098136    5272 round_trippers.go:580]     Audit-Id: bba7f595-7461-4e4a-b018-5e5d6f5f51cd
	I0425 12:25:04.098139    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.098143    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.098338    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"421","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0425 12:25:04.098542    5272 pod_ready.go:92] pod "etcd-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:25:04.098549    5272 pod_ready.go:81] duration metric: took 2.77725ms for pod "etcd-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:04.098561    5272 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:04.098589    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-034000
	I0425 12:25:04.098593    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.098598    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.098602    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.099614    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:04.099625    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.099633    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.099637    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.099647    5272 round_trippers.go:580]     Audit-Id: 6f7a9414-ff33-4b47-bfb4-6c686b5f2107
	I0425 12:25:04.099651    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.099653    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.099656    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.099819    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-034000","namespace":"kube-system","uid":"d142ad34-9a12-42f9-b92d-e0f968eaaa14","resourceVersion":"278","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"d809c763efd59e895582aab9f4e65d83","kubernetes.io/config.mirror":"d809c763efd59e895582aab9f4e65d83","kubernetes.io/config.seen":"2024-04-25T19:24:03.349967563Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0425 12:25:04.100053    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:25:04.100059    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.100065    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.100077    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.101164    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:04.101173    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.101179    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.101186    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.101190    5272 round_trippers.go:580]     Audit-Id: 7d340071-ee44-44ea-a9ec-32aaa0954d77
	I0425 12:25:04.101193    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.101196    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.101201    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.101375    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"421","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0425 12:25:04.101543    5272 pod_ready.go:92] pod "kube-apiserver-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:25:04.101556    5272 pod_ready.go:81] duration metric: took 2.984943ms for pod "kube-apiserver-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:04.101563    5272 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:04.101596    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-034000
	I0425 12:25:04.101600    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.101606    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.101610    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.102688    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:04.102694    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.102698    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.102701    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.102709    5272 round_trippers.go:580]     Audit-Id: 8edb9e66-3da8-43f9-968d-2b8f64a15f4f
	I0425 12:25:04.102713    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.102716    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.102721    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.102897    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-034000","namespace":"kube-system","uid":"19072fbe-3cb2-4b92-bd98-b549daec4cf2","resourceVersion":"305","creationTimestamp":"2024-04-25T19:24:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8241dd03fc3448a4525ccebdefebf535","kubernetes.io/config.mirror":"8241dd03fc3448a4525ccebdefebf535","kubernetes.io/config.seen":"2024-04-25T19:23:58.495195502Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0425 12:25:04.103134    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:25:04.103140    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.103154    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.103159    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.104091    5272 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:25:04.104100    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.104110    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.104115    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.104119    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.104122    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.104126    5272 round_trippers.go:580]     Audit-Id: 8c31d783-ae79-41f8-a3ca-b90338ce409d
	I0425 12:25:04.104128    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.104266    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"421","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0425 12:25:04.104442    5272 pod_ready.go:92] pod "kube-controller-manager-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:25:04.104449    5272 pod_ready.go:81] duration metric: took 2.881638ms for pod "kube-controller-manager-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:04.104455    5272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gmspl" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:04.286273    5272 request.go:629] Waited for 181.772013ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmspl
	I0425 12:25:04.286424    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmspl
	I0425 12:25:04.286436    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.286448    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.286455    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.289457    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:25:04.289474    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.289484    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.289487    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.289491    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.289517    5272 round_trippers.go:580]     Audit-Id: 5684fd73-a01b-4239-a791-bfcf01bacea8
	I0425 12:25:04.289525    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.289533    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.289636    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gmspl","generateName":"kube-proxy-","namespace":"kube-system","uid":"b0f6c7c8-ef54-4c63-9de2-05e01ace3e15","resourceVersion":"380","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
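The "Waited ... due to client-side throttling, not priority and fairness" messages are emitted by client-go's local token-bucket rate limiter, not by server-side API Priority and Fairness; with the default rest.Config limits (QPS 5, Burst 10) the burst of readiness GETs above drains the bucket, so each extra request is delayed a couple hundred milliseconds before it is even sent. A hedged sketch of constructing a client with looser limits:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient is a sketch: it loosens the client-side token bucket so
    // bursts of polling GETs are not delayed locally the way they are above.
    func newFastClient() (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5 requests/second
        cfg.Burst = 100 // default burst is 10
        return kubernetes.NewForConfig(cfg)
    }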
	I0425 12:25:04.486363    5272 request.go:629] Waited for 196.344508ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:25:04.486428    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:25:04.486436    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.486447    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.486487    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.488965    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:25:04.488980    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.488986    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.488992    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.489005    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.489010    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.489013    5272 round_trippers.go:580]     Audit-Id: c3f3f966-3712-48e1-89a1-f3ccd0e0fdb5
	I0425 12:25:04.489023    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.489415    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"421","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0425 12:25:04.489670    5272 pod_ready.go:92] pod "kube-proxy-gmspl" in "kube-system" namespace has status "Ready":"True"
	I0425 12:25:04.489683    5272 pod_ready.go:81] duration metric: took 385.210639ms for pod "kube-proxy-gmspl" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:04.489696    5272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mp7qm" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:04.686299    5272 request.go:629] Waited for 196.55565ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mp7qm
	I0425 12:25:04.686361    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mp7qm
	I0425 12:25:04.686367    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.686373    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.686377    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.688728    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:25:04.688740    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.688747    5272 round_trippers.go:580]     Audit-Id: 8c71e430-404f-4bae-abc3-6e027ee71b27
	I0425 12:25:04.688752    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.688757    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.688761    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.688768    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.688772    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:04 GMT
	I0425 12:25:04.688952    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mp7qm","generateName":"kube-proxy-","namespace":"kube-system","uid":"cc106198-3317-44e2-b1a7-cc5eac6dcadc","resourceVersion":"479","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0425 12:25:04.886455    5272 request.go:629] Waited for 197.211749ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:25:04.886497    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:25:04.886503    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:04.886510    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:04.886516    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:04.888284    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:04.888294    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:04.888301    5272 round_trippers.go:580]     Audit-Id: 7c468bad-6f59-4914-b204-df85076387a2
	I0425 12:25:04.888305    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:04.888309    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:04.888329    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:04.888332    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:04.888334    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:05 GMT
	I0425 12:25:04.888439    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"495","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3262 chars]
	I0425 12:25:04.888606    5272 pod_ready.go:92] pod "kube-proxy-mp7qm" in "kube-system" namespace has status "Ready":"True"
	I0425 12:25:04.888615    5272 pod_ready.go:81] duration metric: took 398.897141ms for pod "kube-proxy-mp7qm" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:04.888623    5272 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:05.087453    5272 request.go:629] Waited for 198.788853ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-034000
	I0425 12:25:05.087495    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-034000
	I0425 12:25:05.087503    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:05.087511    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:05.087516    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:05.089348    5272 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:25:05.089364    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:05.089372    5272 round_trippers.go:580]     Audit-Id: bad9794e-4b3c-46cc-ac01-3e8f743e841c
	I0425 12:25:05.089377    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:05.089382    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:05.089385    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:05.089420    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:05.089430    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:05 GMT
	I0425 12:25:05.089651    5272 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-034000","namespace":"kube-system","uid":"889fb9d4-d8d9-4a92-be22-d0ab1518bc93","resourceVersion":"274","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9a729f77a28640b9fa006c14e6acbd43","kubernetes.io/config.mirror":"9a729f77a28640b9fa006c14e6acbd43","kubernetes.io/config.seen":"2024-04-25T19:24:03.349969029Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0425 12:25:05.286888    5272 request.go:629] Waited for 196.968738ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:25:05.286998    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:25:05.287009    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:05.287020    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:05.287029    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:05.289465    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:25:05.289479    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:05.289489    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:05.289495    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:05.289499    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:05.289504    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:05 GMT
	I0425 12:25:05.289509    5272 round_trippers.go:580]     Audit-Id: 936e37cb-89a7-420b-9f8c-a8bf3abffb72
	I0425 12:25:05.289515    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:05.289862    5272 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"421","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0425 12:25:05.290102    5272 pod_ready.go:92] pod "kube-scheduler-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:25:05.290114    5272 pod_ready.go:81] duration metric: took 401.473041ms for pod "kube-scheduler-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:25:05.290123    5272 pod_ready.go:38] duration metric: took 1.201609034s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
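Each pod_ready check above inspects the pod's PodReady condition, which the kubelet sets only after every container passes its readiness probe, rather than just the Pending/Running phase. A minimal sketch of that test, assuming a clientset built as in the earlier snippet:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podIsReady mirrors the check behind the pod_ready lines above: a pod
    // counts as Ready only when its PodReady condition is True.
    // Sketch for illustration, not minikube's actual code.
    func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }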
	I0425 12:25:05.290146    5272 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 12:25:05.290208    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:25:05.301882    5272 system_svc.go:56] duration metric: took 11.731305ms WaitForService to wait for kubelet
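The kubelet service probe relies on the exit status of systemctl is-active --quiet: zero means the unit is active, any other state (inactive, failed, unknown) exits non-zero. A local sketch of the same probe, assuming a systemd host (minikube runs the command over SSH inside the VM):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet kubelet` prints nothing and exits 0
        // only when the unit is active; any other state exits non-zero.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }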
	I0425 12:25:05.301898    5272 kubeadm.go:576] duration metric: took 13.898409733s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 12:25:05.301918    5272 node_conditions.go:102] verifying NodePressure condition ...
	I0425 12:25:05.487134    5272 request.go:629] Waited for 185.119782ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes
	I0425 12:25:05.487183    5272 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0425 12:25:05.487192    5272 round_trippers.go:469] Request Headers:
	I0425 12:25:05.487240    5272 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:25:05.487263    5272 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:25:05.489436    5272 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:25:05.489450    5272 round_trippers.go:577] Response Headers:
	I0425 12:25:05.489457    5272 round_trippers.go:580]     Audit-Id: f8141b4f-4cb1-4bc1-b1b7-8fcf6091e19b
	I0425 12:25:05.489461    5272 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:25:05.489466    5272 round_trippers.go:580]     Content-Type: application/json
	I0425 12:25:05.489470    5272 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:25:05.489477    5272 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:25:05.489480    5272 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:25:05 GMT
	I0425 12:25:05.489614    5272 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"421","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9265 chars]
	I0425 12:25:05.491451    5272 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:25:05.491469    5272 node_conditions.go:123] node cpu capacity is 2
	I0425 12:25:05.491476    5272 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:25:05.491480    5272 node_conditions.go:123] node cpu capacity is 2
	I0425 12:25:05.491496    5272 node_conditions.go:105] duration metric: took 189.566662ms to run NodePressure ...
	I0425 12:25:05.491507    5272 start.go:240] waiting for startup goroutines ...
	I0425 12:25:05.491528    5272 start.go:254] writing updated cluster config ...
	I0425 12:25:05.492251    5272 ssh_runner.go:195] Run: rm -f paused
	I0425 12:25:05.538257    5272 start.go:600] kubectl: 1.29.2, cluster: 1.30.0 (minor skew: 1)
	I0425 12:25:05.581793    5272 out.go:177] * Done! kubectl is now configured to use "multinode-034000" cluster and "default" namespace by default
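
	Note: the start log above ends with a client/server minor-version skew of 1 (kubectl 1.29.2 against cluster 1.30.0), which is inside kubectl's supported +/-1 minor range, so the warning is informational. A minimal sketch for re-checking the skew, assuming the multinode-034000 profile is still running:

	  # print client and server versions; the skew note appears when minors differ
	  kubectl --context multinode-034000 version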
	
	
	==> Docker <==
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.353150128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.357734064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.357829251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.357856472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.358000978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:24:26 multinode-034000 cri-dockerd[1082]: time="2024-04-25T19:24:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4035615d6144351f0f65c9b702c06f70f8ea6acfc339ad751d4f244189aed65d/resolv.conf as [nameserver 192.169.0.1]"
	Apr 25 19:24:26 multinode-034000 cri-dockerd[1082]: time="2024-04-25T19:24:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6ceafd789a01ef57b6be621d1b802c5c8f7b67b0e328cb1302a71269263b4bc1/resolv.conf as [nameserver 192.169.0.1]"
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.482328513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.482374858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.482386318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.482451673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.567367117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.567415438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.567426381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:24:26 multinode-034000 dockerd[1186]: time="2024-04-25T19:24:26.567490309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:25:07 multinode-034000 dockerd[1186]: time="2024-04-25T19:25:07.505441149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 25 19:25:07 multinode-034000 dockerd[1186]: time="2024-04-25T19:25:07.505492727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 25 19:25:07 multinode-034000 dockerd[1186]: time="2024-04-25T19:25:07.505504803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:25:07 multinode-034000 dockerd[1186]: time="2024-04-25T19:25:07.505622930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:25:07 multinode-034000 cri-dockerd[1082]: time="2024-04-25T19:25:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/716dc468b6bee6cbff5e11ad48e5cfc44d3f2010ce8dbb1270e0f9e345ee6080/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 25 19:25:08 multinode-034000 cri-dockerd[1082]: time="2024-04-25T19:25:08Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 25 19:25:08 multinode-034000 dockerd[1186]: time="2024-04-25T19:25:08.682381664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 25 19:25:08 multinode-034000 dockerd[1186]: time="2024-04-25T19:25:08.682491732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 25 19:25:08 multinode-034000 dockerd[1186]: time="2024-04-25T19:25:08.682504191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:25:08 multinode-034000 dockerd[1186]: time="2024-04-25T19:25:08.682579163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
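
	The dockerd and cri-dockerd entries above come from the guest's systemd journal. A hedged sketch for pulling the same logs from a live node (assumes the profile name multinode-034000 and the usual minikube unit names):

	  # dockerd logs from inside the minikube VM
	  minikube -p multinode-034000 ssh -- sudo journalctl -u docker --no-pager
	  # cri-dockerd runs as its own unit
	  minikube -p multinode-034000 ssh -- sudo journalctl -u cri-docker --no-pager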
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	223e2c1e65bef       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   2 minutes ago       Running             busybox                   0                   716dc468b6bee       busybox-fc5497c4f-hkq6z
	d1a679398f5dd       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   0                   6ceafd789a01e       coredns-7db6d8ff4d-w5z5l
	5a723e5001a43       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       0                   4035615d61443       storage-provisioner
	6bbf310089edb       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              3 minutes ago       Running             kindnet-cni               0                   f9e2b9879728a       kindnet-7ktv2
	573591286ee6c       a0bf559e280cf                                                                                         3 minutes ago       Running             kube-proxy                0                   2dd0e2cf1dfae       kube-proxy-gmspl
	03ce4bf6442bc       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   605cbae9d65dd       etcd-multinode-034000
	a3c296be16a7a       259c8277fcbbc                                                                                         4 minutes ago       Running             kube-scheduler            0                   351cbbeac7eae       kube-scheduler-multinode-034000
	5d1799046a89c       c7aad43836fa5                                                                                         4 minutes ago       Running             kube-controller-manager   0                   f6c4a60e9a529       kube-controller-manager-multinode-034000
	691ca6c89d9a0       c42f13656d0b2                                                                                         4 minutes ago       Running             kube-apiserver            0                   46ac0a1db04a9       kube-apiserver-multinode-034000
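
	The table above is CRI container state: all control-plane containers plus busybox are Running on attempt 0, i.e. nothing has crash-looped. The same view can be reproduced from inside the node, where crictl ships with the guest image:

	  # list all CRI containers, including exited ones
	  minikube -p multinode-034000 ssh -- sudo crictl ps -a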
	
	
	==> coredns [d1a679398f5d] <==
	[INFO] 10.244.0.3:34174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000672s
	[INFO] 10.244.1.2:47840 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122116s
	[INFO] 10.244.1.2:44610 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000063983s
	[INFO] 10.244.1.2:53825 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074155s
	[INFO] 10.244.1.2:56114 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000041609s
	[INFO] 10.244.1.2:59228 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000055104s
	[INFO] 10.244.1.2:45598 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000217219s
	[INFO] 10.244.1.2:48480 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079958s
	[INFO] 10.244.1.2:34039 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057072s
	[INFO] 10.244.0.3:51389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110967s
	[INFO] 10.244.0.3:49029 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057828s
	[INFO] 10.244.0.3:49200 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081092s
	[INFO] 10.244.0.3:45787 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090854s
	[INFO] 10.244.1.2:44075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109399s
	[INFO] 10.244.1.2:43218 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076398s
	[INFO] 10.244.1.2:48145 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077478s
	[INFO] 10.244.1.2:46778 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094212s
	[INFO] 10.244.0.3:42740 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096877s
	[INFO] 10.244.0.3:39459 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006717s
	[INFO] 10.244.0.3:44705 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104604s
	[INFO] 10.244.0.3:45646 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000094006s
	[INFO] 10.244.1.2:52301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078287s
	[INFO] 10.244.1.2:41666 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089165s
	[INFO] 10.244.1.2:45151 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000063309s
	[INFO] 10.244.1.2:53549 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000056035s
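
	Every coredns query above resolved with NOERROR or NXDOMAIN in well under a millisecond, so in-cluster DNS was healthy at capture time. To tail the same logs on a live cluster (the pod name suffix is generated, so select by label rather than hard-coding it):

	  # follow coredns logs via the k8s-app label
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50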
	
	
	==> describe nodes <==
	Name:               multinode-034000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-034000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=multinode-034000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T12_24_04_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:24:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-034000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:27:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:25:35 +0000   Thu, 25 Apr 2024 19:24:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:25:35 +0000   Thu, 25 Apr 2024 19:24:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:25:35 +0000   Thu, 25 Apr 2024 19:24:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:25:35 +0000   Thu, 25 Apr 2024 19:24:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.16
	  Hostname:    multinode-034000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a04f849885354b69abffad3d85c18563
	  System UUID:                e4584236-0000-0000-8047-fdddf635d073
	  Boot ID:                    bce7e869-132a-42ed-8fda-3edbf14cf29f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hkq6z                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 coredns-7db6d8ff4d-w5z5l                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m45s
	  kube-system                 etcd-multinode-034000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m59s
	  kube-system                 kindnet-7ktv2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m45s
	  kube-system                 kube-apiserver-multinode-034000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-multinode-034000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-gmspl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kube-scheduler-multinode-034000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (10%)   220Mi (10%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m44s  kube-proxy       
	  Normal  Starting                 3m59s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m59s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m59s  kubelet          Node multinode-034000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s  kubelet          Node multinode-034000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s  kubelet          Node multinode-034000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m46s  node-controller  Node multinode-034000 event: Registered Node multinode-034000 in Controller
	  Normal  NodeReady                3m37s  kubelet          Node multinode-034000 status is now: NodeReady
	
	
	Name:               multinode-034000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-034000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=multinode-034000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T12_24_51_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:24:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-034000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:27:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:25:21 +0000   Thu, 25 Apr 2024 19:24:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:25:21 +0000   Thu, 25 Apr 2024 19:24:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:25:21 +0000   Thu, 25 Apr 2024 19:24:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:25:21 +0000   Thu, 25 Apr 2024 19:25:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.17
	  Hostname:    multinode-034000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 39e79e15727842fb95c7c39fa6204075
	  System UUID:                94b448d5-0000-0000-b8c4-70380b6d3376
	  Boot ID:                    95eca24f-c216-49b1-856d-b0ee708e555e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mw494    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 kindnet-gmxwj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m11s
	  kube-system                 kube-proxy-mp7qm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m5s                   kube-proxy       
	  Normal  RegisteredNode           3m11s                  node-controller  Node multinode-034000-m02 event: Registered Node multinode-034000-m02 in Controller
	  Normal  NodeHasSufficientMemory  3m11s (x2 over 3m11s)  kubelet          Node multinode-034000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m11s (x2 over 3m11s)  kubelet          Node multinode-034000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m11s (x2 over 3m11s)  kubelet          Node multinode-034000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m58s                  kubelet          Node multinode-034000-m02 status is now: NodeReady
	
	
	Name:               multinode-034000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-034000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=multinode-034000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T12_25_33_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:25:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-034000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:25:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 25 Apr 2024 19:25:45 +0000   Thu, 25 Apr 2024 19:26:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 25 Apr 2024 19:25:45 +0000   Thu, 25 Apr 2024 19:26:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 25 Apr 2024 19:25:45 +0000   Thu, 25 Apr 2024 19:26:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 25 Apr 2024 19:25:45 +0000   Thu, 25 Apr 2024 19:26:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.18
	  Hostname:    multinode-034000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 08a55b32762e46a89b5488a1e67f7148
	  System UUID:                c8494b42-0000-0000-86e6-91828949bf04
	  Boot ID:                    3eadf707-b01a-4904-8e16-e722020f8250
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-spsv9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m29s
	  kube-system                 kube-proxy-d8zc5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m30s (x2 over 2m30s)  kubelet          Node multinode-034000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m30s (x2 over 2m30s)  kubelet          Node multinode-034000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m30s (x2 over 2m30s)  kubelet          Node multinode-034000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m26s                  node-controller  Node multinode-034000-m03 event: Registered Node multinode-034000-m03 in Controller
	  Normal  NodeReady                2m17s                  kubelet          Node multinode-034000-m03 status is now: NodeReady
	  Normal  NodeNotReady             86s                    node-controller  Node multinode-034000-m03 status is now: NodeNotReady
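
	Of the three nodes described above, only multinode-034000-m03 is unhealthy: its kubelet stopped posting status (RenewTime 19:25:52), its conditions flipped to Unknown at 19:26:36, and the node-controller applied the unreachable NoSchedule/NoExecute taints and marked it NodeNotReady. A minimal sketch for confirming that state on a live cluster:

	  # one-line status per node; m03 should show NotReady
	  kubectl get nodes -o wide
	  # taints, conditions, and recent events for the suspect node
	  kubectl describe node multinode-034000-m03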
	
	
	==> dmesg <==
	[  +2.242949] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +3.082819] systemd-fstab-generator[506]: Ignoring "noauto" option for root device
	[  +0.100142] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +1.699104] systemd-fstab-generator[798]: Ignoring "noauto" option for root device
	[  +0.267864] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.096515] systemd-fstab-generator[848]: Ignoring "noauto" option for root device
	[  +0.120066] systemd-fstab-generator[862]: Ignoring "noauto" option for root device
	[  +2.529186] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
	[  +0.087894] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	[  +0.121817] systemd-fstab-generator[1059]: Ignoring "noauto" option for root device
	[  +0.058878] kauditd_printk_skb: 230 callbacks suppressed
	[  +0.092308] systemd-fstab-generator[1074]: Ignoring "noauto" option for root device
	[  +4.283450] systemd-fstab-generator[1172]: Ignoring "noauto" option for root device
	[  +2.198041] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.283182] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +5.545532] systemd-fstab-generator[1565]: Ignoring "noauto" option for root device
	[  +0.052819] kauditd_printk_skb: 51 callbacks suppressed
	[Apr25 19:24] systemd-fstab-generator[1967]: Ignoring "noauto" option for root device
	[  +0.081325] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.049105] systemd-fstab-generator[2163]: Ignoring "noauto" option for root device
	[  +0.119977] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.982825] kauditd_printk_skb: 60 callbacks suppressed
	[Apr25 19:25] kauditd_printk_skb: 14 callbacks suppressed
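
	The dmesg excerpt is boot-time noise (fstab generators ignoring "noauto", NFSD recovery-directory warnings, audit-log throttling) rather than a crash signature. To capture the kernel ring buffer from a running node:

	  # kernel messages from inside the guest
	  minikube -p multinode-034000 ssh -- sudo dmesg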
	
	
	==> etcd [03ce4bf6442b] <==
	{"level":"info","ts":"2024-04-25T19:23:59.472527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa switched to configuration voters=(1317664063532327594)"}
	{"level":"info","ts":"2024-04-25T19:23:59.473347Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1e23f9358b15cc2f","local-member-id":"1249487c082462aa","added-peer-id":"1249487c082462aa","added-peer-peer-urls":["https://192.169.0.16:2380"]}
	{"level":"info","ts":"2024-04-25T19:23:59.475795Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-25T19:23:59.481057Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1249487c082462aa","initial-advertise-peer-urls":["https://192.169.0.16:2380"],"listen-peer-urls":["https://192.169.0.16:2380"],"advertise-client-urls":["https://192.169.0.16:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.16:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-25T19:23:59.481099Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-25T19:23:59.480905Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-04-25T19:23:59.481121Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-04-25T19:23:59.622276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-25T19:23:59.622335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-25T19:23:59.622555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa received MsgPreVoteResp from 1249487c082462aa at term 1"}
	{"level":"info","ts":"2024-04-25T19:23:59.622594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became candidate at term 2"}
	{"level":"info","ts":"2024-04-25T19:23:59.622602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa received MsgVoteResp from 1249487c082462aa at term 2"}
	{"level":"info","ts":"2024-04-25T19:23:59.622609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became leader at term 2"}
	{"level":"info","ts":"2024-04-25T19:23:59.622614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1249487c082462aa elected leader 1249487c082462aa at term 2"}
	{"level":"info","ts":"2024-04-25T19:23:59.628448Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:23:59.628761Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1249487c082462aa","local-member-attributes":"{Name:multinode-034000 ClientURLs:[https://192.169.0.16:2379]}","request-path":"/0/members/1249487c082462aa/attributes","cluster-id":"1e23f9358b15cc2f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-25T19:23:59.629135Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:23:59.629625Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:23:59.631785Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T19:23:59.63597Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T19:23:59.63284Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-25T19:23:59.635617Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e23f9358b15cc2f","local-member-id":"1249487c082462aa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:23:59.648343Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:23:59.650527Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:23:59.663353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.16:2379"}
	
	
	==> kernel <==
	 19:28:03 up 4 min,  0 users,  load average: 0.25, 0.34, 0.16
	Linux multinode-034000 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6bbf310089ed] <==
	I0425 19:27:21.184849       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	I0425 19:27:31.191739       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0425 19:27:31.191773       1 main.go:227] handling current node
	I0425 19:27:31.191780       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0425 19:27:31.191785       1 main.go:250] Node multinode-034000-m02 has CIDR [10.244.1.0/24] 
	I0425 19:27:31.191872       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0425 19:27:31.191899       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	I0425 19:27:41.204827       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0425 19:27:41.204901       1 main.go:227] handling current node
	I0425 19:27:41.204920       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0425 19:27:41.204934       1 main.go:250] Node multinode-034000-m02 has CIDR [10.244.1.0/24] 
	I0425 19:27:41.205006       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0425 19:27:41.205045       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	I0425 19:27:51.208736       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0425 19:27:51.208887       1 main.go:227] handling current node
	I0425 19:27:51.208934       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0425 19:27:51.208952       1 main.go:250] Node multinode-034000-m02 has CIDR [10.244.1.0/24] 
	I0425 19:27:51.209107       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0425 19:27:51.209191       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	I0425 19:28:01.217780       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0425 19:28:01.217852       1 main.go:227] handling current node
	I0425 19:28:01.217870       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0425 19:28:01.217883       1 main.go:250] Node multinode-034000-m02 has CIDR [10.244.1.0/24] 
	I0425 19:28:01.217991       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0425 19:28:01.218048       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
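
	kindnet re-lists the nodes roughly every 10 seconds and programs routes for each peer's PodCIDR, so the repetition above is its normal steady-state loop; note it still routes to m03 (192.169.0.18) even though that node is NotReady. The CIDRs it reports should match what the IPAM controller assigned; a sketch to cross-check:

	  # print each node name and its assigned PodCIDR
	  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'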
	
	
	==> kube-apiserver [691ca6c89d9a] <==
	I0425 19:24:02.018908       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0425 19:24:02.022672       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0425 19:24:02.023095       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0425 19:24:02.353597       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0425 19:24:02.382499       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0425 19:24:02.465948       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0425 19:24:02.469621       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.16]
	I0425 19:24:02.470410       1 controller.go:615] quota admission added evaluator for: endpoints
	I0425 19:24:02.472875       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0425 19:24:03.058703       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0425 19:24:03.504657       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0425 19:24:03.512135       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0425 19:24:03.519155       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0425 19:24:16.663606       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0425 19:24:17.219552       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0425 19:25:09.418972       1 conn.go:339] Error on socket receive: read tcp 192.169.0.16:8443->192.169.0.1:52425: use of closed network connection
	E0425 19:25:09.608180       1 conn.go:339] Error on socket receive: read tcp 192.169.0.16:8443->192.169.0.1:52427: use of closed network connection
	E0425 19:25:09.803973       1 conn.go:339] Error on socket receive: read tcp 192.169.0.16:8443->192.169.0.1:52429: use of closed network connection
	E0425 19:25:09.994056       1 conn.go:339] Error on socket receive: read tcp 192.169.0.16:8443->192.169.0.1:52431: use of closed network connection
	E0425 19:25:10.184994       1 conn.go:339] Error on socket receive: read tcp 192.169.0.16:8443->192.169.0.1:52433: use of closed network connection
	E0425 19:25:10.376907       1 conn.go:339] Error on socket receive: read tcp 192.169.0.16:8443->192.169.0.1:52435: use of closed network connection
	E0425 19:25:10.718217       1 conn.go:339] Error on socket receive: read tcp 192.169.0.16:8443->192.169.0.1:52438: use of closed network connection
	E0425 19:25:10.912454       1 conn.go:339] Error on socket receive: read tcp 192.169.0.16:8443->192.169.0.1:52440: use of closed network connection
	E0425 19:25:11.109116       1 conn.go:339] Error on socket receive: read tcp 192.169.0.16:8443->192.169.0.1:52442: use of closed network connection
	E0425 19:25:11.308050       1 conn.go:339] Error on socket receive: read tcp 192.169.0.16:8443->192.169.0.1:52444: use of closed network connection
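
	The repeated "use of closed network connection" errors are the API server logging a client (192.169.0.1, apparently the hyperkit host side) dropping streaming connections mid-read; they cluster around 19:25:09-11, which likely corresponds to the test's rapid kubectl exec probes, and are noisy rather than fatal. To filter for them on a live control plane:

	  # grep the apiserver static-pod log for dropped-connection noise
	  kubectl -n kube-system logs kube-apiserver-multinode-034000 | grep 'closed network connection'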
	
	
	==> kube-controller-manager [5d1799046a89] <==
	I0425 19:24:17.647018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.324µs"
	I0425 19:24:26.000137       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.041µs"
	I0425 19:24:26.020051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.843µs"
	I0425 19:24:26.260842       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0425 19:24:27.653013       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.155µs"
	I0425 19:24:27.668905       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.499254ms"
	I0425 19:24:27.670147       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.91µs"
	I0425 19:24:51.152497       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-034000-m02\" does not exist"
	I0425 19:24:51.159771       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-034000-m02" podCIDRs=["10.244.1.0/24"]
	I0425 19:24:51.265242       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-034000-m02"
	I0425 19:25:04.097706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-034000-m02"
	I0425 19:25:06.271745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.049102ms"
	I0425 19:25:06.282882       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.462574ms"
	I0425 19:25:06.289779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.115163ms"
	I0425 19:25:06.290043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.37µs"
	I0425 19:25:08.175257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.022296ms"
	I0425 19:25:08.175734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.517µs"
	I0425 19:25:08.873029       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.132021ms"
	I0425 19:25:08.873366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.969µs"
	I0425 19:25:33.210402       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-034000-m02"
	I0425 19:25:33.211453       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-034000-m03\" does not exist"
	I0425 19:25:33.227490       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-034000-m03" podCIDRs=["10.244.2.0/24"]
	I0425 19:25:36.286256       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-034000-m03"
	I0425 19:25:46.185093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-034000-m02"
	I0425 19:26:36.307835       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-034000-m02"
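
	The controller-manager entries show the node-ipam-controller handing out PodCIDRs as each worker joined (10.244.1.0/24 to m02, 10.244.2.0/24 to m03); the "does not exist" attach-detach errors are a benign race, logged before the Node object finished registering. To read an allocation directly:

	  # assigned pod CIDR for one node
	  kubectl get node multinode-034000-m02 -o jsonpath='{.spec.podCIDR}'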
	
	
	==> kube-proxy [573591286ee6] <==
	I0425 19:24:18.434388       1 server_linux.go:69] "Using iptables proxy"
	I0425 19:24:18.449968       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.16"]
	I0425 19:24:18.483080       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:24:18.483210       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:24:18.483240       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:24:18.485629       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:24:18.485827       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:24:18.485857       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:24:18.486807       1 config.go:192] "Starting service config controller"
	I0425 19:24:18.487223       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:24:18.487260       1 config.go:319] "Starting node config controller"
	I0425 19:24:18.487265       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:24:18.487555       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:24:18.487584       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:24:18.588292       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:24:18.588509       1 shared_informer.go:320] Caches are synced for node config
	I0425 19:24:18.588519       1 shared_informer.go:320] Caches are synced for service config
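
	kube-proxy came up in IPv4 single-stack iptables mode (the guest kernel lacks ip6tables nat support, as the kubelet canary errors below also show) and synced all three config caches. A sketch for verifying the proxier actually programmed service rules, from inside the node:

	  # service chains written by the iptables proxier
	  minikube -p multinode-034000 ssh -- sudo iptables-save -t nat | grep KUBE-SVC | head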
	
	
	==> kube-scheduler [a3c296be16a7] <==
	W0425 19:24:01.100731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0425 19:24:01.101806       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0425 19:24:01.100765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0425 19:24:01.101848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 19:24:01.100501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 19:24:01.102094       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 19:24:01.101495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 19:24:01.102106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 19:24:01.906317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 19:24:01.906382       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 19:24:02.005981       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0425 19:24:02.006244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0425 19:24:02.058501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0425 19:24:02.058581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0425 19:24:02.073051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0425 19:24:02.073130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0425 19:24:02.105993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 19:24:02.106104       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0425 19:24:02.131193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 19:24:02.131270       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 19:24:02.150710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0425 19:24:02.150900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0425 19:24:02.256463       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 19:24:02.256557       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0425 19:24:05.393089       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
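	
	[Editor's note] The burst of "forbidden" warnings above is typical of kube-scheduler starting its informers before the apiserver has finished syncing RBAC; the section ends with "Caches are synced", so the errors resolved on their own. If they persisted, one way to separate a real RBAC gap from this startup race is kubectl impersonation (context name taken from this report):
	
	        kubectl --context multinode-034000 auth can-i list statefulsets.apps --as=system:kube-scheduler
	        kubectl --context multinode-034000 auth can-i watch namespaces --as=system:kube-scheduler
	
	Each command answers "yes" or "no" for the impersonated system:kube-scheduler user.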
	
	
	==> kubelet <==
	Apr 25 19:24:27 multinode-034000 kubelet[1974]: I0425 19:24:27.654938    1974 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.654933101 podStartE2EDuration="10.654933101s" podCreationTimestamp="2024-04-25 19:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-25 19:24:26.6059014 +0000 UTC m=+23.333203902" watchObservedRunningTime="2024-04-25 19:24:27.654933101 +0000 UTC m=+24.382235604"
	Apr 25 19:25:03 multinode-034000 kubelet[1974]: E0425 19:25:03.413089    1974 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:25:03 multinode-034000 kubelet[1974]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:25:03 multinode-034000 kubelet[1974]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:25:03 multinode-034000 kubelet[1974]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:25:03 multinode-034000 kubelet[1974]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 19:25:06 multinode-034000 kubelet[1974]: I0425 19:25:06.268255    1974 topology_manager.go:215] "Topology Admit Handler" podUID="06d559a6-e84a-4f26-8980-b56fefca9346" podNamespace="default" podName="busybox-fc5497c4f-hkq6z"
	Apr 25 19:25:06 multinode-034000 kubelet[1974]: W0425 19:25:06.274480    1974 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-034000" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-034000' and this object
	Apr 25 19:25:06 multinode-034000 kubelet[1974]: E0425 19:25:06.274510    1974 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-034000" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-034000' and this object
	Apr 25 19:25:06 multinode-034000 kubelet[1974]: I0425 19:25:06.469295    1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwkqt\" (UniqueName: \"kubernetes.io/projected/06d559a6-e84a-4f26-8980-b56fefca9346-kube-api-access-rwkqt\") pod \"busybox-fc5497c4f-hkq6z\" (UID: \"06d559a6-e84a-4f26-8980-b56fefca9346\") " pod="default/busybox-fc5497c4f-hkq6z"
	Apr 25 19:26:03 multinode-034000 kubelet[1974]: E0425 19:26:03.414415    1974 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:26:03 multinode-034000 kubelet[1974]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:26:03 multinode-034000 kubelet[1974]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:26:03 multinode-034000 kubelet[1974]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:26:03 multinode-034000 kubelet[1974]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 19:27:03 multinode-034000 kubelet[1974]: E0425 19:27:03.410081    1974 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:27:03 multinode-034000 kubelet[1974]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:27:03 multinode-034000 kubelet[1974]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:27:03 multinode-034000 kubelet[1974]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:27:03 multinode-034000 kubelet[1974]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 19:28:03 multinode-034000 kubelet[1974]: E0425 19:28:03.411411    1974 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:28:03 multinode-034000 kubelet[1974]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:28:03 multinode-034000 kubelet[1974]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:28:03 multinode-034000 kubelet[1974]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:28:03 multinode-034000 kubelet[1974]:  > table="nat" chain="KUBE-KUBELET-CANARY"
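	
	[Editor's note] The repeating "iptables canary" stanzas are kubelet probing whether its iptables chains survive; the IPv6 half fails because the guest kernel has no ip6tables nat table, which is harmless for this IPv4-only cluster. To confirm inside the VM, one could try loading the module by hand (assuming the buildroot image ships it at all):
	
	        minikube -p multinode-034000 ssh -- sudo modprobe ip6table_nat
	        minikube -p multinode-034000 ssh -- sudo ip6tables -t nat -L -n
	
	If the modprobe fails, the guest kernel simply lacks IPv6 NAT support and the canary error can be ignored.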
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-034000 -n multinode-034000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-034000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (127.30s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (100.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-034000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-034000
E0425 12:28:17.252756    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-034000: (24.832901947s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-034000 --wait=true -v=8 --alsologtostderr
E0425 12:28:34.190814    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-034000 --wait=true -v=8 --alsologtostderr: exit status 90 (1m15.718715016s)

                                                
                                                
-- stdout --
	* [multinode-034000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-034000" primary control-plane node in "multinode-034000" cluster
	* Restarting existing hyperkit VM for "multinode-034000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 12:28:29.129855    5772 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:28:29.130137    5772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:28:29.130142    5772 out.go:304] Setting ErrFile to fd 2...
	I0425 12:28:29.130146    5772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:28:29.130321    5772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:28:29.131683    5772 out.go:298] Setting JSON to false
	I0425 12:28:29.154722    5772 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5279,"bootTime":1714068030,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0425 12:28:29.154845    5772 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 12:28:29.176556    5772 out.go:177] * [multinode-034000] minikube v1.33.0 on Darwin 14.4.1
	I0425 12:28:29.218520    5772 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 12:28:29.218569    5772 notify.go:220] Checking for updates...
	I0425 12:28:29.240318    5772 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:28:29.261468    5772 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 12:28:29.282346    5772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 12:28:29.303424    5772 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	I0425 12:28:29.324380    5772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 12:28:29.346278    5772 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:28:29.346435    5772 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 12:28:29.347152    5772 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:28:29.347252    5772 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:28:29.356890    5772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53068
	I0425 12:28:29.357234    5772 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:28:29.357647    5772 main.go:141] libmachine: Using API Version  1
	I0425 12:28:29.357656    5772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:28:29.357867    5772 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:28:29.357973    5772 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:28:29.386413    5772 out.go:177] * Using the hyperkit driver based on existing profile
	I0425 12:28:29.407416    5772 start.go:297] selected driver: hyperkit
	I0425 12:28:29.407453    5772 start.go:901] validating driver "hyperkit" against &{Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.18 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:28:29.407691    5772 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 12:28:29.407875    5772 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:28:29.408083    5772 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18757-1425/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0425 12:28:29.417104    5772 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0425 12:28:29.420911    5772 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:28:29.420944    5772 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0425 12:28:29.423926    5772 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 12:28:29.423989    5772 cni.go:84] Creating CNI manager for ""
	I0425 12:28:29.423999    5772 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0425 12:28:29.424063    5772 start.go:340] cluster config:
	{Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.18 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:28:29.424170    5772 iso.go:125] acquiring lock: {Name:mk776ce15f524979e50f0732af6183703dc958eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:28:29.466269    5772 out.go:177] * Starting "multinode-034000" primary control-plane node in "multinode-034000" cluster
	I0425 12:28:29.487413    5772 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:28:29.487483    5772 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 12:28:29.487504    5772 cache.go:56] Caching tarball of preloaded images
	I0425 12:28:29.487722    5772 preload.go:173] Found /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 12:28:29.487746    5772 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 12:28:29.487931    5772 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:28:29.488804    5772 start.go:360] acquireMachinesLock for multinode-034000: {Name:mk3030f9170bc25c9124548f80d3e90a8c4abff5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 12:28:29.488920    5772 start.go:364] duration metric: took 92.863µs to acquireMachinesLock for "multinode-034000"
	I0425 12:28:29.488953    5772 start.go:96] Skipping create...Using existing machine configuration
	I0425 12:28:29.489005    5772 fix.go:54] fixHost starting: 
	I0425 12:28:29.489492    5772 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:28:29.489520    5772 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:28:29.498385    5772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53070
	I0425 12:28:29.498696    5772 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:28:29.499048    5772 main.go:141] libmachine: Using API Version  1
	I0425 12:28:29.499059    5772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:28:29.499316    5772 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:28:29.499451    5772 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:28:29.499560    5772 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:28:29.499673    5772 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:28:29.499738    5772 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:28:29.500816    5772 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid 5283 missing from process table
	I0425 12:28:29.500857    5772 fix.go:112] recreateIfNeeded on multinode-034000: state=Stopped err=<nil>
	I0425 12:28:29.500874    5772 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	W0425 12:28:29.500962    5772 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 12:28:29.522340    5772 out.go:177] * Restarting existing hyperkit VM for "multinode-034000" ...
	I0425 12:28:29.543163    5772 main.go:141] libmachine: (multinode-034000) Calling .Start
	I0425 12:28:29.543336    5772 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:28:29.543366    5772 main.go:141] libmachine: (multinode-034000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/hyperkit.pid
	I0425 12:28:29.544313    5772 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid 5283 missing from process table
	I0425 12:28:29.544326    5772 main.go:141] libmachine: (multinode-034000) DBG | pid 5283 is in state "Stopped"
	I0425 12:28:29.544337    5772 main.go:141] libmachine: (multinode-034000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/hyperkit.pid...
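	
	[Editor's note] The lines above show the driver's stale-pid handling: the saved hyperkit pid (5283) is read back, found missing from the process table, so the machine is treated as Stopped and the leftover pid file is deleted before a fresh hyperkit process (pid 5785 below) is launched. A hand-run equivalent of that liveness check (pid taken from this log, using the common signal-0 idiom, not necessarily what the driver does internally) would be:
	
	        kill -0 5283 2>/dev/null && echo running || echo stale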
	I0425 12:28:29.544508    5772 main.go:141] libmachine: (multinode-034000) DBG | Using UUID e458d994-a066-4236-8047-fdddf635d073
	I0425 12:28:29.653956    5772 main.go:141] libmachine: (multinode-034000) DBG | Generated MAC 1e:d3:c3:87:d3:c7
	I0425 12:28:29.653981    5772 main.go:141] libmachine: (multinode-034000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000
	I0425 12:28:29.654101    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:29 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e458d994-a066-4236-8047-fdddf635d073", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0425 12:28:29.654129    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:29 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e458d994-a066-4236-8047-fdddf635d073", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0425 12:28:29.654177    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:29 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e458d994-a066-4236-8047-fdddf635d073", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/multinode-034000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/bzimage,/Users/jenkins/minikube-integration/1875
7-1425/.minikube/machines/multinode-034000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"}
	I0425 12:28:29.654224    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:29 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e458d994-a066-4236-8047-fdddf635d073 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/multinode-034000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/console-ring -f kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/bzimage,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"
	I0425 12:28:29.654237    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:29 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0425 12:28:29.655837    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:29 DEBUG: hyperkit: Pid is 5785
	I0425 12:28:29.656274    5772 main.go:141] libmachine: (multinode-034000) DBG | Attempt 0
	I0425 12:28:29.656295    5772 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:28:29.656357    5772 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5785
	I0425 12:28:29.658393    5772 main.go:141] libmachine: (multinode-034000) DBG | Searching for 1e:d3:c3:87:d3:c7 in /var/db/dhcpd_leases ...
	I0425 12:28:29.658455    5772 main.go:141] libmachine: (multinode-034000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0425 12:28:29.658542    5772 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:aa:be:2a:d5:f9:e ID:1,aa:be:2a:d5:f9:e Lease:0x662bffcd}
	I0425 12:28:29.658583    5772 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:46:26:de:d7:8e:2e ID:1,46:26:de:d7:8e:2e Lease:0x662bff76}
	I0425 12:28:29.658603    5772 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:d3:c3:87:d3:c7 ID:1,1e:d3:c3:87:d3:c7 Lease:0x662bff37}
	I0425 12:28:29.658616    5772 main.go:141] libmachine: (multinode-034000) DBG | Found match: 1e:d3:c3:87:d3:c7
	I0425 12:28:29.658625    5772 main.go:141] libmachine: (multinode-034000) Calling .GetConfigRaw
	I0425 12:28:29.658669    5772 main.go:141] libmachine: (multinode-034000) DBG | IP: 192.169.0.16
	I0425 12:28:29.659347    5772 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:28:29.659629    5772 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:28:29.660136    5772 machine.go:94] provisionDockerMachine start ...
	I0425 12:28:29.660147    5772 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:28:29.660282    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:29.660401    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:28:29.660522    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:29.660629    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:29.660729    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:28:29.660850    5772 main.go:141] libmachine: Using SSH client type: native
	I0425 12:28:29.661057    5772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbc2bb80] 0xbc2e8e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:28:29.661065    5772 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 12:28:29.663524    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:29 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0425 12:28:29.714882    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:29 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0425 12:28:29.715555    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:28:29.715577    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:28:29.715585    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:28:29.715594    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:28:30.094568    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0425 12:28:30.094583    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0425 12:28:30.209055    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:28:30.209081    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:28:30.209092    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:28:30.209100    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:28:30.209971    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0425 12:28:30.209983    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0425 12:28:35.446822    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:35 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0425 12:28:35.446847    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:35 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0425 12:28:35.446860    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:35 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0425 12:28:35.470855    5772 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:28:35 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0425 12:28:40.722984    5772 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 12:28:40.722998    5772 main.go:141] libmachine: (multinode-034000) Calling .GetMachineName
	I0425 12:28:40.723155    5772 buildroot.go:166] provisioning hostname "multinode-034000"
	I0425 12:28:40.723188    5772 main.go:141] libmachine: (multinode-034000) Calling .GetMachineName
	I0425 12:28:40.723295    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:40.723389    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:28:40.723497    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:40.723607    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:40.723705    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:28:40.723852    5772 main.go:141] libmachine: Using SSH client type: native
	I0425 12:28:40.724009    5772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbc2bb80] 0xbc2e8e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:28:40.724017    5772 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-034000 && echo "multinode-034000" | sudo tee /etc/hostname
	I0425 12:28:40.783908    5772 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-034000
	
	I0425 12:28:40.783926    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:40.784061    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:28:40.784149    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:40.784247    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:40.784332    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:28:40.784459    5772 main.go:141] libmachine: Using SSH client type: native
	I0425 12:28:40.784597    5772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbc2bb80] 0xbc2e8e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:28:40.784609    5772 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-034000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-034000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-034000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 12:28:40.840612    5772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 12:28:40.840631    5772 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18757-1425/.minikube CaCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18757-1425/.minikube}
	I0425 12:28:40.840648    5772 buildroot.go:174] setting up certificates
	I0425 12:28:40.840656    5772 provision.go:84] configureAuth start
	I0425 12:28:40.840663    5772 main.go:141] libmachine: (multinode-034000) Calling .GetMachineName
	I0425 12:28:40.840804    5772 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:28:40.840908    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:40.841007    5772 provision.go:143] copyHostCerts
	I0425 12:28:40.841039    5772 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:28:40.841105    5772 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem, removing ...
	I0425 12:28:40.841114    5772 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:28:40.841309    5772 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem (1078 bytes)
	I0425 12:28:40.841522    5772 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:28:40.841564    5772 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem, removing ...
	I0425 12:28:40.841569    5772 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:28:40.841674    5772 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem (1123 bytes)
	I0425 12:28:40.841823    5772 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:28:40.841861    5772 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem, removing ...
	I0425 12:28:40.841866    5772 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:28:40.841940    5772 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem (1675 bytes)
	I0425 12:28:40.842089    5772 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem org=jenkins.multinode-034000 san=[127.0.0.1 192.169.0.16 localhost minikube multinode-034000]
	I0425 12:28:40.967190    5772 provision.go:177] copyRemoteCerts
	I0425 12:28:40.967248    5772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 12:28:40.967265    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:40.967401    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:28:40.967506    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:40.967598    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:28:40.967691    5772 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:28:40.998768    5772 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 12:28:40.998842    5772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0425 12:28:41.018681    5772 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 12:28:41.018748    5772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0425 12:28:41.038474    5772 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 12:28:41.038533    5772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 12:28:41.058040    5772 provision.go:87] duration metric: took 217.365074ms to configureAuth
	I0425 12:28:41.058054    5772 buildroot.go:189] setting minikube options for container-runtime
	I0425 12:28:41.058220    5772 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:28:41.058234    5772 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:28:41.058368    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:41.058463    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:28:41.058557    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:41.058648    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:41.058740    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:28:41.058857    5772 main.go:141] libmachine: Using SSH client type: native
	I0425 12:28:41.058999    5772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbc2bb80] 0xbc2e8e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:28:41.059007    5772 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0425 12:28:41.107874    5772 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0425 12:28:41.107885    5772 buildroot.go:70] root file system type: tmpfs
	I0425 12:28:41.107953    5772 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0425 12:28:41.107967    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:41.108095    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:28:41.108174    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:41.108260    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:41.108337    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:28:41.108466    5772 main.go:141] libmachine: Using SSH client type: native
	I0425 12:28:41.108621    5772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbc2bb80] 0xbc2e8e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:28:41.108662    5772 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0425 12:28:41.169891    5772 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0425 12:28:41.169918    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:41.170047    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:28:41.170156    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:41.170232    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:41.170337    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:28:41.170457    5772 main.go:141] libmachine: Using SSH client type: native
	I0425 12:28:41.170600    5772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbc2bb80] 0xbc2e8e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:28:41.170612    5772 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0425 12:28:42.773205    5772 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0425 12:28:42.773219    5772 machine.go:97] duration metric: took 13.112681561s to provisionDockerMachine
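	
	[Editor's note] The "diff: can't stat ... No such file or directory" output above is the expected path on a freshly booted guest: the compound SSH command diffs the rendered unit against /lib/systemd/system/docker.service and, since that file does not exist yet, falls through to the branch that installs, enables, and restarts docker (hence the "Created symlink" line). To inspect the unit actually in effect afterwards, one could run:
	
	        minikube -p multinode-034000 ssh -- systemctl cat docker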
	I0425 12:28:42.773227    5772 start.go:293] postStartSetup for "multinode-034000" (driver="hyperkit")
	I0425 12:28:42.773254    5772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 12:28:42.773282    5772 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:28:42.773470    5772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 12:28:42.773483    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:42.773586    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:28:42.773671    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:42.773757    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:28:42.773843    5772 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:28:42.805684    5772 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 12:28:42.808650    5772 command_runner.go:130] > NAME=Buildroot
	I0425 12:28:42.808659    5772 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0425 12:28:42.808662    5772 command_runner.go:130] > ID=buildroot
	I0425 12:28:42.808666    5772 command_runner.go:130] > VERSION_ID=2023.02.9
	I0425 12:28:42.808676    5772 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0425 12:28:42.808758    5772 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 12:28:42.808768    5772 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/addons for local assets ...
	I0425 12:28:42.808870    5772 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/files for local assets ...
	I0425 12:28:42.809051    5772 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> 18852.pem in /etc/ssl/certs
	I0425 12:28:42.809057    5772 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /etc/ssl/certs/18852.pem
	I0425 12:28:42.809264    5772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 12:28:42.817822    5772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /etc/ssl/certs/18852.pem (1708 bytes)
	I0425 12:28:42.836875    5772 start.go:296] duration metric: took 63.620236ms for postStartSetup
	I0425 12:28:42.836898    5772 fix.go:56] duration metric: took 13.347508859s for fixHost
	I0425 12:28:42.836910    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:42.837046    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:28:42.837142    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:42.837240    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:42.837324    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:28:42.837449    5772 main.go:141] libmachine: Using SSH client type: native
	I0425 12:28:42.837589    5772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbc2bb80] 0xbc2e8e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:28:42.837597    5772 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 12:28:42.887319    5772 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714073322.834747749
	
	I0425 12:28:42.887330    5772 fix.go:216] guest clock: 1714073322.834747749
	I0425 12:28:42.887335    5772 fix.go:229] Guest: 2024-04-25 12:28:42.834747749 -0700 PDT Remote: 2024-04-25 12:28:42.836901 -0700 PDT m=+13.749543187 (delta=-2.153251ms)
	I0425 12:28:42.887348    5772 fix.go:200] guest clock delta is within tolerance: -2.153251ms
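	
	[Editor's note] The clock check works by running date +%s.%N in the guest and comparing it with the host wall clock at the moment of the read; from the values in this log:
	
	        1714073322.834747749 (guest) - 1714073322.836901 (host) = -0.002153251 s  =  -2.153251 ms
	
	which is inside minikube's tolerance, so no time correction is pushed to the VM.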
	I0425 12:28:42.887352    5772 start.go:83] releasing machines lock for "multinode-034000", held for 13.398018421s
	I0425 12:28:42.887374    5772 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:28:42.887497    5772 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:28:42.887586    5772 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:28:42.887870    5772 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:28:42.887978    5772 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:28:42.888060    5772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 12:28:42.888089    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:42.888124    5772 ssh_runner.go:195] Run: cat /version.json
	I0425 12:28:42.888135    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:28:42.888185    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:28:42.888225    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:28:42.888259    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:42.888325    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:28:42.888346    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:28:42.888434    5772 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:28:42.888460    5772 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:28:42.888551    5772 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:28:42.916075    5772 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0425 12:28:42.916265    5772 ssh_runner.go:195] Run: systemctl --version
	I0425 12:28:42.966456    5772 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0425 12:28:42.966590    5772 command_runner.go:130] > systemd 252 (252)
	I0425 12:28:42.966613    5772 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0425 12:28:42.966713    5772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0425 12:28:42.971526    5772 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0425 12:28:42.971590    5772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 12:28:42.971633    5772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 12:28:42.984837    5772 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0425 12:28:42.984978    5772 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
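	The find/mv pipeline above is what produces the "disabled [...] bridge cni config(s)" line: any bridge or podman CNI config in /etc/cni/net.d that is not already suffixed ".mk_disabled" gets renamed so the kubelet will not pick it up. A rough local equivalent of that rename pass (a sketch, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableConflictingCNI renames bridge/podman CNI configs in dir by
	// appending ".mk_disabled", mirroring the find -exec mv step in the log.
	func disableConflictingCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableConflictingCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("disabled:", disabled) // e.g. [/etc/cni/net.d/87-podman-bridge.conflist]
	}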
	I0425 12:28:42.984986    5772 start.go:494] detecting cgroup driver to use...
	I0425 12:28:42.985093    5772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:28:42.999720    5772 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0425 12:28:43.000026    5772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0425 12:28:43.009034    5772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0425 12:28:43.017883    5772 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0425 12:28:43.017932    5772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0425 12:28:43.026753    5772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:28:43.035971    5772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0425 12:28:43.044912    5772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:28:43.053868    5772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 12:28:43.062975    5772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0425 12:28:43.071984    5772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0425 12:28:43.080784    5772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
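	The run of sed commands above rewrites /etc/containerd/config.toml in place: pinning the pause image, forcing SystemdCgroup to false so containerd uses the cgroupfs driver, mapping legacy runtime names onto io.containerd.runc.v2, and re-enabling unprivileged ports. The SystemdCgroup rewrite, expressed as a single regexp substitution (illustrative only, not minikube's implementation):

	package main

	import (
		"fmt"
		"regexp"
	)

	// forceCgroupfs rewrites any `SystemdCgroup = ...` assignment to false,
	// preserving indentation, like the sed -r command in the log.
	func forceCgroupfs(config string) string {
		re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
		return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
	}

	func main() {
		in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
		fmt.Print(forceCgroupfs(in))
	}

	Capturing and re-emitting the indentation matters because the setting lives under a nested TOML table, which is why the original sed pattern also captures the leading whitespace.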
	I0425 12:28:43.089843    5772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 12:28:43.097747    5772 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0425 12:28:43.097959    5772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
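	The two sysctl steps confirm that net.bridge.bridge-nf-call-iptables is already 1 and then force IPv4 forwarding on; both are prerequisites for kube-proxy and bridged CNI traffic. The echo into /proc/sys can be mirrored with a plain file write (sketch; requires root):

	package main

	import (
		"fmt"
		"os"
	)

	// enableIPForward mirrors `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	func enableIPForward() error {
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
	}

	func main() {
		if err := enableIPForward(); err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("ipv4 forwarding enabled")
	}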
	I0425 12:28:43.106197    5772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:28:43.205904    5772 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0425 12:28:43.224812    5772 start.go:494] detecting cgroup driver to use...
	I0425 12:28:43.224887    5772 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0425 12:28:43.244452    5772 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0425 12:28:43.245716    5772 command_runner.go:130] > [Unit]
	I0425 12:28:43.245726    5772 command_runner.go:130] > Description=Docker Application Container Engine
	I0425 12:28:43.245730    5772 command_runner.go:130] > Documentation=https://docs.docker.com
	I0425 12:28:43.245735    5772 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0425 12:28:43.245739    5772 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0425 12:28:43.245744    5772 command_runner.go:130] > StartLimitBurst=3
	I0425 12:28:43.245747    5772 command_runner.go:130] > StartLimitIntervalSec=60
	I0425 12:28:43.245761    5772 command_runner.go:130] > [Service]
	I0425 12:28:43.245765    5772 command_runner.go:130] > Type=notify
	I0425 12:28:43.245768    5772 command_runner.go:130] > Restart=on-failure
	I0425 12:28:43.245774    5772 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0425 12:28:43.245783    5772 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0425 12:28:43.245789    5772 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0425 12:28:43.245794    5772 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0425 12:28:43.245804    5772 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0425 12:28:43.245810    5772 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0425 12:28:43.245820    5772 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0425 12:28:43.245830    5772 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0425 12:28:43.245835    5772 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0425 12:28:43.245840    5772 command_runner.go:130] > ExecStart=
	I0425 12:28:43.245852    5772 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0425 12:28:43.245863    5772 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0425 12:28:43.245871    5772 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0425 12:28:43.245880    5772 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0425 12:28:43.245883    5772 command_runner.go:130] > LimitNOFILE=infinity
	I0425 12:28:43.245887    5772 command_runner.go:130] > LimitNPROC=infinity
	I0425 12:28:43.245890    5772 command_runner.go:130] > LimitCORE=infinity
	I0425 12:28:43.245895    5772 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0425 12:28:43.245900    5772 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0425 12:28:43.245909    5772 command_runner.go:130] > TasksMax=infinity
	I0425 12:28:43.245913    5772 command_runner.go:130] > TimeoutStartSec=0
	I0425 12:28:43.245918    5772 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0425 12:28:43.245921    5772 command_runner.go:130] > Delegate=yes
	I0425 12:28:43.245927    5772 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0425 12:28:43.245930    5772 command_runner.go:130] > KillMode=process
	I0425 12:28:43.245933    5772 command_runner.go:130] > [Install]
	I0425 12:28:43.245956    5772 command_runner.go:130] > WantedBy=multi-user.target
	I0425 12:28:43.246025    5772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:28:43.258468    5772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 12:28:43.280212    5772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:28:43.291954    5772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:28:43.305977    5772 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0425 12:28:43.329198    5772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:28:43.341134    5772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:28:43.369993    5772 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0425 12:28:43.370399    5772 ssh_runner.go:195] Run: which cri-dockerd
	I0425 12:28:43.373557    5772 command_runner.go:130] > /usr/bin/cri-dockerd
	I0425 12:28:43.373671    5772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0425 12:28:43.381826    5772 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0425 12:28:43.395242    5772 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0425 12:28:43.491127    5772 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0425 12:28:43.593627    5772 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0425 12:28:43.593695    5772 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
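	docker.go then writes a 130-byte /etc/docker/daemon.json selecting the cgroupfs driver. The log does not show the file's contents; a plausible shape, in which only the cgroup-driver field is actually confirmed by the "configuring docker to use cgroupfs" line above and the rest is assumption, looks like this:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assumed daemon.json shape; only exec-opts/native.cgroupdriver is
		// confirmed by the log. The other fields are illustrative defaults.
		cfg := map[string]any{
			"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
			"log-driver": "json-file",
			"log-opts":   map[string]string{"max-size": "100m"},
		}
		b, err := json.Marshal(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%d bytes)\n", b, len(b))
	}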
	I0425 12:28:43.607527    5772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:28:43.701172    5772 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0425 12:29:44.619567    5772 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0425 12:29:44.619581    5772 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0425 12:29:44.619594    5772 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.916577663s)
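	The restart blocks for almost exactly one minute because dockerd keeps retrying its dial to /run/containerd/containerd.sock until the startup context expires, at which point it dies with "context deadline exceeded" (see the journal output below). A minimal reproduction of that wait-with-deadline pattern (the one-minute timeout is inferred from the 1m0.9s duration above, not read from dockerd's source):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	// waitForSocket retries a unix-socket dial until it succeeds or the
	// context deadline passes, which is roughly how dockerd ends up
	// reporting "context deadline exceeded" after a minute.
	func waitForSocket(ctx context.Context, path string) error {
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", path)
			if err == nil {
				conn.Close()
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("failed to dial %q: %w", path, ctx.Err())
			case <-time.After(time.Second):
				// socket not up yet; retry until the deadline
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()
		if err := waitForSocket(ctx, "/run/containerd/containerd.sock"); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("containerd socket is up")
	}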
	I0425 12:29:44.619649    5772 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0425 12:29:44.629631    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 systemd[1]: Starting Docker Application Container Engine...
	I0425 12:29:44.629643    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[499]: time="2024-04-25T19:28:41.413609642Z" level=info msg="Starting up"
	I0425 12:29:44.629651    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[499]: time="2024-04-25T19:28:41.414214927Z" level=info msg="containerd not running, starting managed containerd"
	I0425 12:29:44.629668    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[499]: time="2024-04-25T19:28:41.414817994Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=505
	I0425 12:29:44.629677    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.432451286Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0425 12:29:44.629686    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.447816951Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0425 12:29:44.629697    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.447838731Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0425 12:29:44.629705    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.447878366Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0425 12:29:44.629714    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.447888679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0425 12:29:44.629724    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448067662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0425 12:29:44.629732    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448107086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0425 12:29:44.629752    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448218789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0425 12:29:44.629761    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448253268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0425 12:29:44.629770    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448264861Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0425 12:29:44.629779    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448272120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0425 12:29:44.629788    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448367106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0425 12:29:44.629797    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448568071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0425 12:29:44.629812    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.450599614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0425 12:29:44.629821    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.450652224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0425 12:29:44.629903    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.450794991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0425 12:29:44.629917    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.450844740Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0425 12:29:44.629927    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.451027693Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0425 12:29:44.629935    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.451094641Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0425 12:29:44.629954    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.451108247Z" level=info msg="metadata content store policy set" policy=shared
	I0425 12:29:44.629964    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.452810340Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0425 12:29:44.629974    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.452860984Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0425 12:29:44.629982    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.452875018Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0425 12:29:44.629990    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.452885009Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0425 12:29:44.629998    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.452893801Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0425 12:29:44.630008    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.452936252Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0425 12:29:44.630017    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453110390Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0425 12:29:44.630026    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453207546Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0425 12:29:44.630035    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453244479Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0425 12:29:44.630044    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453256195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0425 12:29:44.630053    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453295933Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0425 12:29:44.630063    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453308871Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0425 12:29:44.630072    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453317981Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0425 12:29:44.630081    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453329596Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0425 12:29:44.630090    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453339164Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0425 12:29:44.630106    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453347312Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0425 12:29:44.630116    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453355369Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0425 12:29:44.630232    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453366247Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0425 12:29:44.630244    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453379728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630253    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453389152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630261    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453399108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630270    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453408368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630279    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453417159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630287    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453425164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630296    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453432528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630304    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453440558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630313    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453448659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630322    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453480170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630330    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453490810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630339    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453499031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630347    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453506727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630356    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453517082Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0425 12:29:44.630365    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453529885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630373    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453539246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630382    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453550247Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0425 12:29:44.630391    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453578655Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0425 12:29:44.630401    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453614690Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0425 12:29:44.630411    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453624731Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0425 12:29:44.630600    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453631552Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0425 12:29:44.630614    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453664706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0425 12:29:44.630630    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453675431Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0425 12:29:44.630639    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453684366Z" level=info msg="NRI interface is disabled by configuration."
	I0425 12:29:44.630647    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453816655Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0425 12:29:44.630655    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453871022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0425 12:29:44.630663    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453898320Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0425 12:29:44.630672    5772 command_runner.go:130] > Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453909958Z" level=info msg="containerd successfully booted in 0.022264s"
	I0425 12:29:44.630680    5772 command_runner.go:130] > Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.436591263Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0425 12:29:44.630687    5772 command_runner.go:130] > Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.463905627Z" level=info msg="Loading containers: start."
	I0425 12:29:44.630697    5772 command_runner.go:130] > Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.650706039Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0425 12:29:44.630707    5772 command_runner.go:130] > Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.686661323Z" level=info msg="Loading containers: done."
	I0425 12:29:44.630716    5772 command_runner.go:130] > Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.693679040Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0425 12:29:44.630729    5772 command_runner.go:130] > Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.693781584Z" level=info msg="Daemon has completed initialization"
	I0425 12:29:44.630737    5772 command_runner.go:130] > Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.716656650Z" level=info msg="API listen on /var/run/docker.sock"
	I0425 12:29:44.630745    5772 command_runner.go:130] > Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.716786372Z" level=info msg="API listen on [::]:2376"
	I0425 12:29:44.630751    5772 command_runner.go:130] > Apr 25 19:28:42 multinode-034000 systemd[1]: Started Docker Application Container Engine.
	I0425 12:29:44.630756    5772 command_runner.go:130] > Apr 25 19:28:43 multinode-034000 systemd[1]: Stopping Docker Application Container Engine...
	I0425 12:29:44.630764    5772 command_runner.go:130] > Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.677359426Z" level=info msg="Processing signal 'terminated'"
	I0425 12:29:44.630772    5772 command_runner.go:130] > Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.678513896Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0425 12:29:44.630779    5772 command_runner.go:130] > Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.678746442Z" level=info msg="Daemon shutdown complete"
	I0425 12:29:44.630796    5772 command_runner.go:130] > Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.678776830Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0425 12:29:44.630804    5772 command_runner.go:130] > Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.678790272Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0425 12:29:44.630810    5772 command_runner.go:130] > Apr 25 19:28:44 multinode-034000 systemd[1]: docker.service: Deactivated successfully.
	I0425 12:29:44.630815    5772 command_runner.go:130] > Apr 25 19:28:44 multinode-034000 systemd[1]: Stopped Docker Application Container Engine.
	I0425 12:29:44.630821    5772 command_runner.go:130] > Apr 25 19:28:44 multinode-034000 systemd[1]: Starting Docker Application Container Engine...
	I0425 12:29:44.630828    5772 command_runner.go:130] > Apr 25 19:28:44 multinode-034000 dockerd[823]: time="2024-04-25T19:28:44.732281135Z" level=info msg="Starting up"
	I0425 12:29:44.630837    5772 command_runner.go:130] > Apr 25 19:29:44 multinode-034000 dockerd[823]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0425 12:29:44.630877    5772 command_runner.go:130] > Apr 25 19:29:44 multinode-034000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0425 12:29:44.630884    5772 command_runner.go:130] > Apr 25 19:29:44 multinode-034000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0425 12:29:44.630890    5772 command_runner.go:130] > Apr 25 19:29:44 multinode-034000 systemd[1]: Failed to start Docker Application Container Engine.
	I0425 12:29:44.655699    5772 out.go:177] 
	W0425 12:29:44.677571    5772 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 25 19:28:41 multinode-034000 systemd[1]: Starting Docker Application Container Engine...
	Apr 25 19:28:41 multinode-034000 dockerd[499]: time="2024-04-25T19:28:41.413609642Z" level=info msg="Starting up"
	Apr 25 19:28:41 multinode-034000 dockerd[499]: time="2024-04-25T19:28:41.414214927Z" level=info msg="containerd not running, starting managed containerd"
	Apr 25 19:28:41 multinode-034000 dockerd[499]: time="2024-04-25T19:28:41.414817994Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=505
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.432451286Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.447816951Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.447838731Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.447878366Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.447888679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448067662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448107086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448218789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448253268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448264861Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448272120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448367106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.448568071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.450599614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.450652224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.450794991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.450844740Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.451027693Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.451094641Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.451108247Z" level=info msg="metadata content store policy set" policy=shared
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.452810340Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.452860984Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.452875018Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.452885009Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.452893801Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.452936252Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453110390Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453207546Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453244479Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453256195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453295933Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453308871Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453317981Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453329596Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453339164Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453347312Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453355369Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453366247Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453379728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453389152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453399108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453408368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453417159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453425164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453432528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453440558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453448659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453480170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453490810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453499031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453506727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453517082Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453529885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453539246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453550247Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453578655Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453614690Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453624731Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453631552Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453664706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453675431Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453684366Z" level=info msg="NRI interface is disabled by configuration."
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453816655Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453871022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453898320Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453909958Z" level=info msg="containerd successfully booted in 0.022264s"
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.436591263Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.463905627Z" level=info msg="Loading containers: start."
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.650706039Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.686661323Z" level=info msg="Loading containers: done."
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.693679040Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.693781584Z" level=info msg="Daemon has completed initialization"
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.716656650Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.716786372Z" level=info msg="API listen on [::]:2376"
	Apr 25 19:28:42 multinode-034000 systemd[1]: Started Docker Application Container Engine.
	Apr 25 19:28:43 multinode-034000 systemd[1]: Stopping Docker Application Container Engine...
	Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.677359426Z" level=info msg="Processing signal 'terminated'"
	Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.678513896Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.678746442Z" level=info msg="Daemon shutdown complete"
	Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.678776830Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.678790272Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 25 19:28:44 multinode-034000 systemd[1]: docker.service: Deactivated successfully.
	Apr 25 19:28:44 multinode-034000 systemd[1]: Stopped Docker Application Container Engine.
	Apr 25 19:28:44 multinode-034000 systemd[1]: Starting Docker Application Container Engine...
	Apr 25 19:28:44 multinode-034000 dockerd[823]: time="2024-04-25T19:28:44.732281135Z" level=info msg="Starting up"
	Apr 25 19:29:44 multinode-034000 dockerd[823]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 25 19:29:44 multinode-034000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 25 19:29:44 multinode-034000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 25 19:29:44 multinode-034000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453490810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453499031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453506727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453517082Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453529885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453539246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453550247Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453578655Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453614690Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453624731Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453631552Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453664706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453675431Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453684366Z" level=info msg="NRI interface is disabled by configuration."
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453816655Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453871022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453898320Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 25 19:28:41 multinode-034000 dockerd[505]: time="2024-04-25T19:28:41.453909958Z" level=info msg="containerd successfully booted in 0.022264s"
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.436591263Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.463905627Z" level=info msg="Loading containers: start."
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.650706039Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.686661323Z" level=info msg="Loading containers: done."
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.693679040Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.693781584Z" level=info msg="Daemon has completed initialization"
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.716656650Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 25 19:28:42 multinode-034000 dockerd[499]: time="2024-04-25T19:28:42.716786372Z" level=info msg="API listen on [::]:2376"
	Apr 25 19:28:42 multinode-034000 systemd[1]: Started Docker Application Container Engine.
	Apr 25 19:28:43 multinode-034000 systemd[1]: Stopping Docker Application Container Engine...
	Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.677359426Z" level=info msg="Processing signal 'terminated'"
	Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.678513896Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.678746442Z" level=info msg="Daemon shutdown complete"
	Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.678776830Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 25 19:28:43 multinode-034000 dockerd[499]: time="2024-04-25T19:28:43.678790272Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 25 19:28:44 multinode-034000 systemd[1]: docker.service: Deactivated successfully.
	Apr 25 19:28:44 multinode-034000 systemd[1]: Stopped Docker Application Container Engine.
	Apr 25 19:28:44 multinode-034000 systemd[1]: Starting Docker Application Container Engine...
	Apr 25 19:28:44 multinode-034000 dockerd[823]: time="2024-04-25T19:28:44.732281135Z" level=info msg="Starting up"
	Apr 25 19:29:44 multinode-034000 dockerd[823]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 25 19:29:44 multinode-034000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 25 19:29:44 multinode-034000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 25 19:29:44 multinode-034000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0425 12:29:44.677690    5772 out.go:239] * 
	W0425 12:29:44.679008    5772 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 12:29:44.764517    5772 out.go:177] 
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-034000" : exit status 90
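
Note: the restart failed inside the guest before the test got to `node list`. The journal above shows dockerd[823] waiting a full minute (19:28:44 to 19:29:44) for containerd's socket and then giving up with "context deadline exceeded", which is what surfaced as exit status 90. A minimal Go sketch of that dial-until-deadline behavior (not dockerd's or minikube's actual code; the socket path and the 60s budget are taken from the journal):

	// probe.go — retry dialing containerd's socket under a deadline; with no
	// listener this fails after 60s with "context deadline exceeded", the
	// same error text as the dockerd[823] line above.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
			if err == nil {
				conn.Close()
				fmt.Println("containerd socket is up")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("failed to dial:", ctx.Err()) // context deadline exceeded
				return
			case <-time.After(time.Second): // socket not ready; retry
			}
		}
	}
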
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-034000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-034000 -n multinode-034000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-034000 -n multinode-034000: exit status 6 (182.788796ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0425 12:29:45.007648    5813 status.go:417] kubeconfig endpoint: get endpoint: "multinode-034000" does not appear in /Users/jenkins/minikube-integration/18757-1425/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-034000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (100.88s)
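
The status.go:417 error above — "multinode-034000" does not appear in the kubeconfig — is what triggers the stale-context warning; `minikube update-context -p multinode-034000` rewrites the profile's kubeconfig entry. A hedged sketch of that kind of missing-context check (not minikube's implementation), using the standard client-go kubeconfig loader:

	// kubeconfig_check.go — does the profile exist as a context in the
	// kubeconfig? Mirrors the check that fails at status.go:417 above.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := os.Getenv("KUBECONFIG") // the run above uses .../18757-1425/kubeconfig
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["multinode-034000"]; !ok {
			fmt.Println(`"multinode-034000" missing; run: minikube update-context -p multinode-034000`)
		}
	}
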

TestMultiNode/serial/DeleteNode (0.51s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 node delete m03: exit status 103 (185.58886ms)
-- stdout --
	* The control-plane node multinode-034000 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-034000"
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-034000 node delete m03": exit status 103
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr: exit status 7 (170.65414ms)
-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
	multinode-034000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-034000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0425 12:29:45.299947    5822 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:29:45.300151    5822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:29:45.300157    5822 out.go:304] Setting ErrFile to fd 2...
	I0425 12:29:45.300161    5822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:29:45.300324    5822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:29:45.300485    5822 out.go:298] Setting JSON to false
	I0425 12:29:45.300509    5822 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:29:45.300548    5822 notify.go:220] Checking for updates...
	I0425 12:29:45.300829    5822 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:29:45.300842    5822 status.go:255] checking status of multinode-034000 ...
	I0425 12:29:45.301211    5822 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:29:45.301267    5822 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:29:45.309752    5822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53105
	I0425 12:29:45.310185    5822 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:29:45.310602    5822 main.go:141] libmachine: Using API Version  1
	I0425 12:29:45.310630    5822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:29:45.310833    5822 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:29:45.310973    5822 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:29:45.311064    5822 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:29:45.311133    5822 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5785
	I0425 12:29:45.312086    5822 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:29:45.312107    5822 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:29:45.312364    5822 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:29:45.312384    5822 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:29:45.320598    5822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53107
	I0425 12:29:45.320942    5822 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:29:45.321282    5822 main.go:141] libmachine: Using API Version  1
	I0425 12:29:45.321299    5822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:29:45.321560    5822 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:29:45.321679    5822 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:29:45.321784    5822 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:29:45.322048    5822 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:29:45.322074    5822 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:29:45.330384    5822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53109
	I0425 12:29:45.330715    5822 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:29:45.331094    5822 main.go:141] libmachine: Using API Version  1
	I0425 12:29:45.331115    5822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:29:45.331327    5822 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:29:45.331437    5822 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:29:45.331578    5822 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:29:45.331598    5822 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:29:45.331683    5822 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:29:45.331760    5822 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:29:45.331844    5822 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:29:45.331923    5822 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:29:45.361778    5822 ssh_runner.go:195] Run: systemctl --version
	I0425 12:29:45.366030    5822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0425 12:29:45.376256    5822 status.go:417] kubeconfig endpoint: get endpoint: "multinode-034000" does not appear in /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:29:45.376280    5822 api_server.go:166] Checking apiserver status ...
	I0425 12:29:45.376314    5822 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0425 12:29:45.385879    5822 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:29:45.385889    5822 status.go:422] multinode-034000 apiserver status = Stopped (err=<nil>)
	I0425 12:29:45.385898    5822 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:29:45.385909    5822 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:29:45.386186    5822 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:29:45.386206    5822 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:29:45.394774    5822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53112
	I0425 12:29:45.395155    5822 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:29:45.395496    5822 main.go:141] libmachine: Using API Version  1
	I0425 12:29:45.395512    5822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:29:45.395768    5822 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:29:45.395898    5822 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:29:45.395997    5822 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:29:45.396084    5822 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:29:45.397038    5822 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid 5309 missing from process table
	I0425 12:29:45.397040    5822 status.go:330] multinode-034000-m02 host status = "Stopped" (err=<nil>)
	I0425 12:29:45.397048    5822 status.go:343] host is not running, skipping remaining checks
	I0425 12:29:45.397054    5822 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:29:45.397067    5822 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:29:45.397322    5822 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:29:45.397343    5822 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:29:45.405687    5822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53114
	I0425 12:29:45.406039    5822 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:29:45.406393    5822 main.go:141] libmachine: Using API Version  1
	I0425 12:29:45.406410    5822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:29:45.406645    5822 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:29:45.406764    5822 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:29:45.406846    5822 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:29:45.406939    5822 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:29:45.407843    5822 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid 5609 missing from process table
	I0425 12:29:45.407882    5822 status.go:330] multinode-034000-m03 host status = "Stopped" (err=<nil>)
	I0425 12:29:45.407890    5822 status.go:343] host is not running, skipping remaining checks
	I0425 12:29:45.407898    5822 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr" : exit status 7
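
For reference, the &{Name:... Host:... Kubelet:...} lines logged at status.go:257 above expose the per-node status shape. The struct below is reconstructed from those dumps alone (field types inferred from the logged values, not copied from minikube's source):

	// Status — reconstructed from the status.go:257 dumps in this log.
	package status

	type Status struct {
		Name       string // e.g. "multinode-034000-m02"
		Host       string // "Running" / "Stopped"
		Kubelet    string // "Running" / "Stopped"
		APIServer  string // "Running" / "Stopped"; "Irrelevant" on workers
		Kubeconfig string // "Configured" / "Misconfigured" / "Stopped" / "Irrelevant"
		Worker     bool   // false only for the control-plane node
		TimeToStop string // empty in every dump above
		DockerEnv  string // empty in every dump above
		PodManEnv  string // empty in every dump above
	}
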
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-034000 -n multinode-034000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-034000 -n multinode-034000: exit status 6 (151.020947ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0425 12:29:45.549099    5829 status.go:417] kubeconfig endpoint: get endpoint: "multinode-034000" does not appear in /Users/jenkins/minikube-integration/18757-1425/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-034000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/DeleteNode (0.51s)

TestMultiNode/serial/StopMultiNode (158.81s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-034000 stop: (2m38.53531179s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status: exit status 7 (97.723425ms)
-- stdout --
	multinode-034000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-034000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-034000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr: exit status 7 (97.707835ms)
-- stdout --
	multinode-034000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-034000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-034000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0425 12:32:24.259996    5910 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:32:24.260774    5910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:32:24.260856    5910 out.go:304] Setting ErrFile to fd 2...
	I0425 12:32:24.260869    5910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:32:24.261400    5910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:32:24.261592    5910 out.go:298] Setting JSON to false
	I0425 12:32:24.261617    5910 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:32:24.261659    5910 notify.go:220] Checking for updates...
	I0425 12:32:24.261928    5910 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:32:24.261941    5910 status.go:255] checking status of multinode-034000 ...
	I0425 12:32:24.262283    5910 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:32:24.262331    5910 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:32:24.271283    5910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53141
	I0425 12:32:24.271625    5910 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:32:24.272030    5910 main.go:141] libmachine: Using API Version  1
	I0425 12:32:24.272040    5910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:32:24.272302    5910 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:32:24.272458    5910 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:32:24.272546    5910 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:32:24.272611    5910 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5785
	I0425 12:32:24.273514    5910 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid 5785 missing from process table
	I0425 12:32:24.273544    5910 status.go:330] multinode-034000 host status = "Stopped" (err=<nil>)
	I0425 12:32:24.273552    5910 status.go:343] host is not running, skipping remaining checks
	I0425 12:32:24.273563    5910 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:32:24.273581    5910 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:32:24.273827    5910 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:32:24.273844    5910 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:32:24.282171    5910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53143
	I0425 12:32:24.282493    5910 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:32:24.282876    5910 main.go:141] libmachine: Using API Version  1
	I0425 12:32:24.282895    5910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:32:24.283139    5910 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:32:24.283263    5910 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:32:24.283363    5910 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:32:24.283435    5910 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:32:24.284362    5910 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid 5309 missing from process table
	I0425 12:32:24.284395    5910 status.go:330] multinode-034000-m02 host status = "Stopped" (err=<nil>)
	I0425 12:32:24.284411    5910 status.go:343] host is not running, skipping remaining checks
	I0425 12:32:24.284418    5910 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:32:24.284428    5910 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:32:24.284673    5910 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:32:24.284699    5910 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:32:24.293140    5910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53145
	I0425 12:32:24.293485    5910 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:32:24.293826    5910 main.go:141] libmachine: Using API Version  1
	I0425 12:32:24.293839    5910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:32:24.294043    5910 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:32:24.294154    5910 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:32:24.294235    5910 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:32:24.294310    5910 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:32:24.295241    5910 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid 5609 missing from process table
	I0425 12:32:24.295282    5910 status.go:330] multinode-034000-m03 host status = "Stopped" (err=<nil>)
	I0425 12:32:24.295292    5910 status.go:343] host is not running, skipping remaining checks
	I0425 12:32:24.295298    5910 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr": multinode-034000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
multinode-034000-m02
type: Worker
host: Stopped
kubelet: Stopped
multinode-034000-m03
type: Worker
host: Stopped
kubelet: Stopped
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr": multinode-034000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
multinode-034000-m02
type: Worker
host: Stopped
kubelet: Stopped
multinode-034000-m03
type: Worker
host: Stopped
kubelet: Stopped
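Both failures above (multinode_test.go:364 and :368) fire even though every listed node reports Stopped. The status output still shows three nodes, while the earlier (failed) DeleteNode step should have left two, so an exact-count check over the output would fail in exactly this way. A hedged sketch of that style of check (the expected count of 2 is an assumption, not taken from the test source):

	// stopcount.go — exact-count check over `minikube status` output, in the
	// style implied by multinode_test.go:364/368.
	package main

	import (
		"fmt"
		"strings"
	)

	func checkStopped(statusOut string, wantNodes int) error {
		if n := strings.Count(statusOut, "host: Stopped"); n != wantNodes {
			return fmt.Errorf("incorrect number of stopped hosts: %d (want %d)", n, wantNodes)
		}
		if n := strings.Count(statusOut, "kubelet: Stopped"); n != wantNodes {
			return fmt.Errorf("incorrect number of stopped kubelets: %d (want %d)", n, wantNodes)
		}
		return nil
	}

	func main() {
		out := strings.Repeat("host: Stopped\nkubelet: Stopped\n", 3) // 3 nodes, as logged
		fmt.Println(checkStopped(out, 2))                             // fails: counts are 3, not 2
	}
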
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-034000 -n multinode-034000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-034000 -n multinode-034000: exit status 7 (74.66191ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-034000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (158.81s)

TestMultiNode/serial/RestartMultiNode (128.6s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-034000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0425 12:32:26.188778    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 12:33:34.198849    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-034000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (2m4.414665303s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr
multinode_test.go:388: status says both hosts are not running: args "out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr": 
-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0425 12:34:28.853161    5993 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:34:28.853463    5993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:34:28.853469    5993 out.go:304] Setting ErrFile to fd 2...
	I0425 12:34:28.853472    5993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:34:28.853671    5993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:34:28.853859    5993 out.go:298] Setting JSON to false
	I0425 12:34:28.853880    5993 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:34:28.853927    5993 notify.go:220] Checking for updates...
	I0425 12:34:28.854190    5993 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:34:28.854202    5993 status.go:255] checking status of multinode-034000 ...
	I0425 12:34:28.854557    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:28.854599    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:28.863284    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53260
	I0425 12:34:28.863605    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:28.864020    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:28.864035    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:28.864277    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:28.864403    5993 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:34:28.864485    5993 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:34:28.864542    5993 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5931
	I0425 12:34:28.865523    5993 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:34:28.865540    5993 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:34:28.865781    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:28.865826    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:28.874286    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53262
	I0425 12:34:28.874619    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:28.874944    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:28.874955    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:28.875178    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:28.875284    5993 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:34:28.875362    5993 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:34:28.875636    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:28.875658    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:28.885112    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53264
	I0425 12:34:28.885426    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:28.885735    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:28.885746    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:28.885949    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:28.886062    5993 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:34:28.886194    5993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:34:28.886214    5993 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:34:28.886312    5993 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:34:28.886386    5993 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:34:28.886462    5993 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:34:28.886556    5993 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:34:28.914966    5993 ssh_runner.go:195] Run: systemctl --version
	I0425 12:34:28.919265    5993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:34:28.930728    5993 kubeconfig.go:125] found "multinode-034000" server: "https://192.169.0.16:8443"
	I0425 12:34:28.930753    5993 api_server.go:166] Checking apiserver status ...
	I0425 12:34:28.930792    5993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:34:28.943270    5993 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1675/cgroup
	W0425 12:34:28.950990    5993 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1675/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:34:28.951035    5993 ssh_runner.go:195] Run: ls
	I0425 12:34:28.954498    5993 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:34:28.957578    5993 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:34:28.957593    5993 status.go:422] multinode-034000 apiserver status = Running (err=<nil>)
	I0425 12:34:28.957602    5993 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:34:28.957613    5993 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:34:28.957861    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:28.957885    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:28.966541    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53268
	I0425 12:34:28.966872    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:28.967225    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:28.967243    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:28.967476    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:28.967590    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:34:28.967670    5993 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:34:28.967753    5993 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5949
	I0425 12:34:28.968735    5993 status.go:330] multinode-034000-m02 host status = "Running" (err=<nil>)
	I0425 12:34:28.968746    5993 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:34:28.968984    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:28.969007    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:28.977430    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53270
	I0425 12:34:28.977759    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:28.978077    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:28.978087    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:28.978304    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:28.978443    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:34:28.978537    5993 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:34:28.978801    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:28.978837    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:28.987075    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53272
	I0425 12:34:28.987416    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:28.987747    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:28.987758    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:28.987978    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:28.988121    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:34:28.988238    5993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:34:28.988249    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:34:28.988331    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:34:28.988448    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:34:28.988539    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:34:28.988621    5993 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:34:29.017717    5993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:34:29.028977    5993 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:34:29.029016    5993 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:34:29.029302    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:29.029328    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:29.037896    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53275
	I0425 12:34:29.038235    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:29.038579    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:29.038593    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:29.038806    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:29.038920    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:34:29.039010    5993 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:34:29.039091    5993 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5980
	I0425 12:34:29.040076    5993 status.go:330] multinode-034000-m03 host status = "Running" (err=<nil>)
	I0425 12:34:29.040087    5993 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:34:29.040328    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:29.040356    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:29.048796    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53277
	I0425 12:34:29.049140    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:29.049463    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:29.049477    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:29.049708    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:29.049816    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:34:29.049898    5993 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:34:29.050166    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:29.050192    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:29.058637    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53279
	I0425 12:34:29.058970    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:29.059283    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:29.059291    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:29.059483    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:29.059586    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:34:29.059734    5993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:34:29.059746    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:29.059824    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:29.059911    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:29.059992    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:29.060068    5993 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:34:29.091304    5993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:34:29.101824    5993 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
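
RestartMultiNode shows the same count mismatch in reverse: all three nodes came back Running (the freezer-cgroup warning at api_server.go:177 is harmless here — the check falls back to `ls` and the healthz probe returns 200), yet multinode_test.go:388 above and the :392 check below still fail. That is consistent with an exact count of "host: Running" / "kubelet: Running" lines expecting two nodes rather than the three that restarted; a hedged sketch:

	// runcount.go — three Running nodes make both counts 3, not the 2 an
	// exact-count assertion would expect after a successful node delete.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		out := strings.Repeat("host: Running\nkubelet: Running\n", 3)
		fmt.Println(strings.Count(out, "host: Running") != 2)    // true -> test fails
		fmt.Println(strings.Count(out, "kubelet: Running") != 2) // true -> test fails
	}
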
multinode_test.go:392: status says both kubelets are not running: args "out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr": 
-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0425 12:34:28.853161    5993 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:34:28.853463    5993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:34:28.853469    5993 out.go:304] Setting ErrFile to fd 2...
	I0425 12:34:28.853472    5993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:34:28.853671    5993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:34:28.853859    5993 out.go:298] Setting JSON to false
	I0425 12:34:28.853880    5993 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:34:28.853927    5993 notify.go:220] Checking for updates...
	I0425 12:34:28.854190    5993 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:34:28.854202    5993 status.go:255] checking status of multinode-034000 ...
	I0425 12:34:28.854557    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:28.854599    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:28.863284    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53260
	I0425 12:34:28.863605    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:28.864020    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:28.864035    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:28.864277    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:28.864403    5993 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:34:28.864485    5993 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:34:28.864542    5993 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5931
	I0425 12:34:28.865523    5993 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:34:28.865540    5993 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:34:28.865781    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:28.865826    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:28.874286    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53262
	I0425 12:34:28.874619    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:28.874944    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:28.874955    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:28.875178    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:28.875284    5993 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:34:28.875362    5993 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:34:28.875636    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:28.875658    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:28.885112    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53264
	I0425 12:34:28.885426    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:28.885735    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:28.885746    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:28.885949    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:28.886062    5993 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:34:28.886194    5993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:34:28.886214    5993 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:34:28.886312    5993 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:34:28.886386    5993 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:34:28.886462    5993 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:34:28.886556    5993 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:34:28.914966    5993 ssh_runner.go:195] Run: systemctl --version
	I0425 12:34:28.919265    5993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:34:28.930728    5993 kubeconfig.go:125] found "multinode-034000" server: "https://192.169.0.16:8443"
	I0425 12:34:28.930753    5993 api_server.go:166] Checking apiserver status ...
	I0425 12:34:28.930792    5993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:34:28.943270    5993 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1675/cgroup
	W0425 12:34:28.950990    5993 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1675/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:34:28.951035    5993 ssh_runner.go:195] Run: ls
	I0425 12:34:28.954498    5993 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:34:28.957578    5993 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:34:28.957593    5993 status.go:422] multinode-034000 apiserver status = Running (err=<nil>)
	I0425 12:34:28.957602    5993 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:34:28.957613    5993 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:34:28.957861    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:28.957885    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:28.966541    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53268
	I0425 12:34:28.966872    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:28.967225    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:28.967243    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:28.967476    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:28.967590    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:34:28.967670    5993 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:34:28.967753    5993 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5949
	I0425 12:34:28.968735    5993 status.go:330] multinode-034000-m02 host status = "Running" (err=<nil>)
	I0425 12:34:28.968746    5993 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:34:28.968984    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:28.969007    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:28.977430    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53270
	I0425 12:34:28.977759    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:28.978077    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:28.978087    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:28.978304    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:28.978443    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:34:28.978537    5993 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:34:28.978801    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:28.978837    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:28.987075    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53272
	I0425 12:34:28.987416    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:28.987747    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:28.987758    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:28.987978    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:28.988121    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:34:28.988238    5993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:34:28.988249    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:34:28.988331    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:34:28.988448    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:34:28.988539    5993 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:34:28.988621    5993 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:34:29.017717    5993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:34:29.028977    5993 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:34:29.029016    5993 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:34:29.029302    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:29.029328    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:29.037896    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53275
	I0425 12:34:29.038235    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:29.038579    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:29.038593    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:29.038806    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:29.038920    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:34:29.039010    5993 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:34:29.039091    5993 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5980
	I0425 12:34:29.040076    5993 status.go:330] multinode-034000-m03 host status = "Running" (err=<nil>)
	I0425 12:34:29.040087    5993 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:34:29.040328    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:29.040356    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:29.048796    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53277
	I0425 12:34:29.049140    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:29.049463    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:29.049477    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:29.049708    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:29.049816    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:34:29.049898    5993 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:34:29.050166    5993 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:29.050192    5993 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:29.058637    5993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53279
	I0425 12:34:29.058970    5993 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:29.059283    5993 main.go:141] libmachine: Using API Version  1
	I0425 12:34:29.059291    5993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:29.059483    5993 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:29.059586    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:34:29.059734    5993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:34:29.059746    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:29.059824    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:29.059911    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:29.059992    5993 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:29.060068    5993 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:34:29.091304    5993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:34:29.101824    5993 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
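The stderr block above is a `minikube status` run with `--alsologtostderr`: for each node it asks the hyperkit driver plugin for the VM state, opens an SSH session, checks the kubelet unit, and, on the control-plane node only, probes the apiserver healthz endpoint. A rough by-hand equivalent of those checks, using the profile name, node name, and control-plane IP taken from this log (a sketch for illustration, not part of the test tooling):

    # per-node kubelet check, mirroring the ssh_runner call in the log above
    out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m02 sudo systemctl is-active --quiet kubelet && echo "kubelet: Running"
    # control-plane health, mirroring the api_server.go healthz probe
    curl -sk https://192.169.0.16:8443/healthz    # expect: ok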
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
multinode_test.go:409: expected 2 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	'

-- /stdout --
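The assertion at multinode_test.go:409 counts the "True" lines the go-template prints, one per node whose Ready condition is True. It expected two, since the `node delete m03` row in the audit table below started at 12:29 PDT and recorded no end time (suggesting the delete never completed), yet all three nodes report Ready. An ad-hoc version of the same count, assuming kubectl is pointed at this cluster:

    # count nodes whose Ready condition is True; the test wanted 2, this run yields 3
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}' | grep -c True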
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-034000 -n multinode-034000
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-034000 logs -n 25: (3.037904021s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-034000 cp multinode-034000-m02:/home/docker/cp-test.txt                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000:/home/docker/cp-test_multinode-034000-m02_multinode-034000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n multinode-034000 sudo cat                                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | /home/docker/cp-test_multinode-034000-m02_multinode-034000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp multinode-034000-m02:/home/docker/cp-test.txt                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03:/home/docker/cp-test_multinode-034000-m02_multinode-034000-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n multinode-034000-m03 sudo cat                                                                       | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | /home/docker/cp-test_multinode-034000-m02_multinode-034000-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp testdata/cp-test.txt                                                                                    | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp multinode-034000-m03:/home/docker/cp-test.txt                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1757473431/001/cp-test_multinode-034000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp multinode-034000-m03:/home/docker/cp-test.txt                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000:/home/docker/cp-test_multinode-034000-m03_multinode-034000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n multinode-034000 sudo cat                                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | /home/docker/cp-test_multinode-034000-m03_multinode-034000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-034000 cp multinode-034000-m03:/home/docker/cp-test.txt                                                           | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m02:/home/docker/cp-test_multinode-034000-m03_multinode-034000-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n                                                                                                     | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | multinode-034000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-034000 ssh -n multinode-034000-m02 sudo cat                                                                       | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	|         | /home/docker/cp-test_multinode-034000-m03_multinode-034000-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-034000 node stop m03                                                                                              | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT | 25 Apr 24 12:25 PDT |
	| node    | multinode-034000 node start                                                                                                 | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:25 PDT |                     |
	|         | m03 -v=7 --alsologtostderr                                                                                                  |                  |         |         |                     |                     |
	| node    | list -p multinode-034000                                                                                                    | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:28 PDT |                     |
	| stop    | -p multinode-034000                                                                                                         | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:28 PDT | 25 Apr 24 12:28 PDT |
	| start   | -p multinode-034000                                                                                                         | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:28 PDT |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-034000                                                                                                    | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:29 PDT |                     |
	| node    | multinode-034000 node delete                                                                                                | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:29 PDT |                     |
	|         | m03                                                                                                                         |                  |         |         |                     |                     |
	| stop    | multinode-034000 stop                                                                                                       | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:29 PDT | 25 Apr 24 12:32 PDT |
	| start   | -p multinode-034000                                                                                                         | multinode-034000 | jenkins | v1.33.0 | 25 Apr 24 12:32 PDT | 25 Apr 24 12:34 PDT |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                           |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
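Each `cp` row in the audit table is paired with `ssh ... sudo cat` rows that read the copied file back on the relevant nodes; that is the verify pattern behind the TestMultiNode/serial/CopyFile entries above. One pair from the table, reproduced as standalone commands (profile and paths taken directly from the table):

    # copy a file from node m02 to the primary node, then read it back to verify
    out/minikube-darwin-amd64 -p multinode-034000 cp multinode-034000-m02:/home/docker/cp-test.txt multinode-034000:/home/docker/cp-test_multinode-034000-m02_multinode-034000.txt
    out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000 sudo cat /home/docker/cp-test_multinode-034000-m02_multinode-034000.txt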
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 12:32:24
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
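Per the format line above, each entry decodes as severity ([IWEF] = Info, Warning, Error, Fatal), mmdd date, wall-clock time, thread id, then file:line and the message; for example, the first line below is an Info entry from thread 5918 at out.go:291. A throwaway filter for surfacing only the warnings and errors in a log saved in this format (the file name is a placeholder):

    # print W/E lines, e.g. the "unexpected machine state, will restart" warning below
    grep -E '^[[:space:]]*[WE][0-9]{4} ' multinode-034000-logs.txt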
	I0425 12:32:24.431853    5918 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:32:24.432049    5918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:32:24.432054    5918 out.go:304] Setting ErrFile to fd 2...
	I0425 12:32:24.432058    5918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:32:24.432248    5918 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:32:24.434206    5918 out.go:298] Setting JSON to false
	I0425 12:32:24.456370    5918 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5514,"bootTime":1714068030,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0425 12:32:24.456475    5918 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 12:32:24.480260    5918 out.go:177] * [multinode-034000] minikube v1.33.0 on Darwin 14.4.1
	I0425 12:32:24.523084    5918 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 12:32:24.523148    5918 notify.go:220] Checking for updates...
	I0425 12:32:24.565715    5918 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:32:24.586793    5918 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 12:32:24.607750    5918 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 12:32:24.628994    5918 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	I0425 12:32:24.650012    5918 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 12:32:24.671754    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:32:24.672451    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:32:24.672519    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:32:24.682131    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53151
	I0425 12:32:24.682474    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:32:24.682898    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:32:24.682913    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:32:24.683135    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:32:24.683267    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:32:24.683472    5918 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 12:32:24.683709    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:32:24.683743    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:32:24.691988    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53153
	I0425 12:32:24.692348    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:32:24.692696    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:32:24.692716    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:32:24.692933    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:32:24.693055    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:32:24.722016    5918 out.go:177] * Using the hyperkit driver based on existing profile
	I0425 12:32:24.763605    5918 start.go:297] selected driver: hyperkit
	I0425 12:32:24.763626    5918 start.go:901] validating driver "hyperkit" against &{Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.18 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:32:24.763760    5918 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 12:32:24.763864    5918 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:32:24.763978    5918 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18757-1425/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0425 12:32:24.772484    5918 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0425 12:32:24.776276    5918 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:32:24.776305    5918 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0425 12:32:24.778951    5918 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 12:32:24.779011    5918 cni.go:84] Creating CNI manager for ""
	I0425 12:32:24.779020    5918 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0425 12:32:24.779091    5918 start.go:340] cluster config:
	{Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.18 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:32:24.779183    5918 iso.go:125] acquiring lock: {Name:mk776ce15f524979e50f0732af6183703dc958eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 12:32:24.800934    5918 out.go:177] * Starting "multinode-034000" primary control-plane node in "multinode-034000" cluster
	I0425 12:32:24.842779    5918 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:32:24.842826    5918 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 12:32:24.842853    5918 cache.go:56] Caching tarball of preloaded images
	I0425 12:32:24.842952    5918 preload.go:173] Found /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 12:32:24.842963    5918 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 12:32:24.843076    5918 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:32:24.843520    5918 start.go:360] acquireMachinesLock for multinode-034000: {Name:mk3030f9170bc25c9124548f80d3e90a8c4abff5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 12:32:24.843576    5918 start.go:364] duration metric: took 43.771µs to acquireMachinesLock for "multinode-034000"
	I0425 12:32:24.843595    5918 start.go:96] Skipping create...Using existing machine configuration
	I0425 12:32:24.843606    5918 fix.go:54] fixHost starting: 
	I0425 12:32:24.843834    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:32:24.843856    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:32:24.852358    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53155
	I0425 12:32:24.852692    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:32:24.853034    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:32:24.853047    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:32:24.853262    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:32:24.853394    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:32:24.853486    5918 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:32:24.853569    5918 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:32:24.853647    5918 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5785
	I0425 12:32:24.854557    5918 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid 5785 missing from process table
	I0425 12:32:24.854599    5918 fix.go:112] recreateIfNeeded on multinode-034000: state=Stopped err=<nil>
	I0425 12:32:24.854616    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	W0425 12:32:24.854698    5918 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 12:32:24.896983    5918 out.go:177] * Restarting existing hyperkit VM for "multinode-034000" ...
	I0425 12:32:24.917824    5918 main.go:141] libmachine: (multinode-034000) Calling .Start
	I0425 12:32:24.918171    5918 main.go:141] libmachine: (multinode-034000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/hyperkit.pid
	I0425 12:32:24.918194    5918 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:32:24.919517    5918 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid 5785 missing from process table
	I0425 12:32:24.919533    5918 main.go:141] libmachine: (multinode-034000) DBG | pid 5785 is in state "Stopped"
	I0425 12:32:24.919553    5918 main.go:141] libmachine: (multinode-034000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/hyperkit.pid...
	I0425 12:32:24.919862    5918 main.go:141] libmachine: (multinode-034000) DBG | Using UUID e458d994-a066-4236-8047-fdddf635d073
	I0425 12:32:25.083953    5918 main.go:141] libmachine: (multinode-034000) DBG | Generated MAC 1e:d3:c3:87:d3:c7
	I0425 12:32:25.083988    5918 main.go:141] libmachine: (multinode-034000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000
	I0425 12:32:25.084171    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e458d994-a066-4236-8047-fdddf635d073", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8b40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0425 12:32:25.084218    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e458d994-a066-4236-8047-fdddf635d073", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8b40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0425 12:32:25.084274    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e458d994-a066-4236-8047-fdddf635d073", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/multinode-034000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/bzimage,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"}
	I0425 12:32:25.084344    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e458d994-a066-4236-8047-fdddf635d073 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/multinode-034000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/console-ring -f kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/bzimage,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"
	I0425 12:32:25.084362    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0425 12:32:25.086187    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 DEBUG: hyperkit: Pid is 5931
	I0425 12:32:25.086621    5918 main.go:141] libmachine: (multinode-034000) DBG | Attempt 0
	I0425 12:32:25.086656    5918 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:32:25.086700    5918 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5931
	I0425 12:32:25.088778    5918 main.go:141] libmachine: (multinode-034000) DBG | Searching for 1e:d3:c3:87:d3:c7 in /var/db/dhcpd_leases ...
	I0425 12:32:25.088853    5918 main.go:141] libmachine: (multinode-034000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0425 12:32:25.088868    5918 main.go:141] libmachine: (multinode-034000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:d3:c3:87:d3:c7 ID:1,1e:d3:c3:87:d3:c7 Lease:0x662c0066}
	I0425 12:32:25.088905    5918 main.go:141] libmachine: (multinode-034000) DBG | Found match: 1e:d3:c3:87:d3:c7
	I0425 12:32:25.088925    5918 main.go:141] libmachine: (multinode-034000) DBG | IP: 192.169.0.16
	I0425 12:32:25.088965    5918 main.go:141] libmachine: (multinode-034000) Calling .GetConfigRaw
	I0425 12:32:25.089642    5918 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:32:25.089843    5918 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:32:25.090243    5918 machine.go:94] provisionDockerMachine start ...
	I0425 12:32:25.090253    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:32:25.090376    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:32:25.090486    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:32:25.090611    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:25.090732    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:25.090854    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:32:25.090991    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:32:25.091191    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:32:25.091198    5918 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 12:32:25.093917    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0425 12:32:25.146108    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0425 12:32:25.146806    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:32:25.146825    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:32:25.146832    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:32:25.146841    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:32:25.522386    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0425 12:32:25.522402    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0425 12:32:25.637574    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:32:25.637597    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:32:25.637610    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:32:25.637618    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:32:25.638515    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0425 12:32:25.638529    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0425 12:32:30.891661    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:30 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0425 12:32:30.891759    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:30 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0425 12:32:30.891768    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:30 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0425 12:32:30.917543    5918 main.go:141] libmachine: (multinode-034000) DBG | 2024/04/25 12:32:30 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0425 12:32:34.675329    5918 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.16:22: connect: connection refused
	I0425 12:32:37.731502    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 12:32:37.731517    5918 main.go:141] libmachine: (multinode-034000) Calling .GetMachineName
	I0425 12:32:37.731659    5918 buildroot.go:166] provisioning hostname "multinode-034000"
	I0425 12:32:37.731670    5918 main.go:141] libmachine: (multinode-034000) Calling .GetMachineName
	I0425 12:32:37.731783    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:32:37.731898    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:32:37.732030    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:37.732135    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:37.732225    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:32:37.732389    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:32:37.732569    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:32:37.732578    5918 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-034000 && echo "multinode-034000" | sudo tee /etc/hostname
	I0425 12:32:37.794029    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-034000
	
	I0425 12:32:37.794050    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:32:37.794185    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:32:37.794296    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:37.794404    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:37.794502    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:32:37.794638    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:32:37.794807    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:32:37.794820    5918 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-034000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-034000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-034000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 12:32:37.849973    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 12:32:37.850001    5918 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18757-1425/.minikube CaCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18757-1425/.minikube}
	I0425 12:32:37.850015    5918 buildroot.go:174] setting up certificates
	I0425 12:32:37.850025    5918 provision.go:84] configureAuth start
	I0425 12:32:37.850035    5918 main.go:141] libmachine: (multinode-034000) Calling .GetMachineName
	I0425 12:32:37.850166    5918 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:32:37.850275    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:32:37.850368    5918 provision.go:143] copyHostCerts
	I0425 12:32:37.850400    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:32:37.850471    5918 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem, removing ...
	I0425 12:32:37.850479    5918 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:32:37.850635    5918 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem (1078 bytes)
	I0425 12:32:37.850835    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:32:37.850874    5918 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem, removing ...
	I0425 12:32:37.850879    5918 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:32:37.850966    5918 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem (1123 bytes)
	I0425 12:32:37.851112    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:32:37.851150    5918 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem, removing ...
	I0425 12:32:37.851154    5918 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:32:37.851243    5918 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem (1675 bytes)
	I0425 12:32:37.851389    5918 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem org=jenkins.multinode-034000 san=[127.0.0.1 192.169.0.16 localhost minikube multinode-034000]
	I0425 12:32:37.979713    5918 provision.go:177] copyRemoteCerts
	I0425 12:32:37.979845    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 12:32:37.979862    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:32:37.979991    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:32:37.980085    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:37.980184    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:32:37.980279    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:32:38.012830    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 12:32:38.012913    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0425 12:32:38.032777    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 12:32:38.032839    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 12:32:38.052456    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 12:32:38.052521    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0425 12:32:38.071937    5918 provision.go:87] duration metric: took 221.889791ms to configureAuth
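The configureAuth step above copies the host CA material and then signs a fresh server certificate whose SANs match the san=[...] list in the log (127.0.0.1, 192.169.0.16, localhost, minikube, multinode-034000). A minimal Go sketch of that signing flow, assuming the CA cert and key sit in PEM files on disk and the CA key is PKCS#1 RSA; the loadCA helper, file names, and lifetime are illustrative, not minikube's actual code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// loadCA reads a PEM CA cert and a PKCS#1 RSA key; error handling is
// elided for brevity (a real CA key may also be PKCS#8 or EC).
func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
	certPEM, _ := os.ReadFile(certPath)
	keyPEM, _ := os.ReadFile(keyPath)
	cb, _ := pem.Decode(certPEM)
	kb, _ := pem.Decode(keyPEM)
	cert, _ := x509.ParseCertificate(cb.Bytes)
	key, _ := x509.ParsePKCS1PrivateKey(kb.Bytes)
	return cert, key
}

func main() {
	caCert, caKey := loadCA("ca.pem", "ca-key.pem") // illustrative paths

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-034000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // lifetime is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.16")},
		DNSNames:    []string{"localhost", "minikube", "multinode-034000"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

Signing against the shared CA is what lets the TLS-verified dockerd configured below (started with --tlsverify and the scp'd /etc/docker/server.pem) authenticate to clients holding that CA.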
	I0425 12:32:38.071949    5918 buildroot.go:189] setting minikube options for container-runtime
	I0425 12:32:38.072114    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:32:38.072127    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:32:38.072258    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:32:38.072360    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:32:38.072451    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:38.072526    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:38.072614    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:32:38.072737    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:32:38.072863    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:32:38.072870    5918 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0425 12:32:38.124151    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0425 12:32:38.124163    5918 buildroot.go:70] root file system type: tmpfs
	I0425 12:32:38.124245    5918 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0425 12:32:38.124258    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:32:38.124386    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:32:38.124486    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:38.124577    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:38.124668    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:32:38.124785    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:32:38.124926    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:32:38.124971    5918 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0425 12:32:38.185657    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0425 12:32:38.185691    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:32:38.185825    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:32:38.185936    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:38.186047    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:38.186145    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:32:38.186277    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:32:38.186427    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:32:38.186439    5918 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0425 12:32:39.765200    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0425 12:32:39.765215    5918 machine.go:97] duration metric: took 14.674523375s to provisionDockerMachine
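The unit update above is a compare-then-swap: the rendered docker.service.new replaces the installed unit, and the service is re-enabled and restarted, only when diff reports a difference (here diff fails because no unit exists yet, so the file is moved into place and the symlink created). A sketch that wraps the exact one-liner from the log, purely to make that control flow visible; running it locally is illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

// The compare-then-swap command, verbatim from the log line above: replace
// and restart only when the new unit differs (or the old one is missing).
const updateDockerUnit = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`

func main() {
	out, err := exec.Command("bash", "-c", updateDockerUnit).CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}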
	I0425 12:32:39.765222    5918 start.go:293] postStartSetup for "multinode-034000" (driver="hyperkit")
	I0425 12:32:39.765229    5918 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 12:32:39.765239    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:32:39.765420    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 12:32:39.765445    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:32:39.765539    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:32:39.765633    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:39.765742    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:32:39.765834    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:32:39.801383    5918 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 12:32:39.805745    5918 command_runner.go:130] > NAME=Buildroot
	I0425 12:32:39.805755    5918 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0425 12:32:39.805759    5918 command_runner.go:130] > ID=buildroot
	I0425 12:32:39.805762    5918 command_runner.go:130] > VERSION_ID=2023.02.9
	I0425 12:32:39.805766    5918 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0425 12:32:39.805946    5918 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 12:32:39.805960    5918 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/addons for local assets ...
	I0425 12:32:39.806066    5918 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/files for local assets ...
	I0425 12:32:39.806267    5918 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> 18852.pem in /etc/ssl/certs
	I0425 12:32:39.806274    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /etc/ssl/certs/18852.pem
	I0425 12:32:39.806479    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 12:32:39.815774    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /etc/ssl/certs/18852.pem (1708 bytes)
	I0425 12:32:39.847626    5918 start.go:296] duration metric: took 82.392241ms for postStartSetup
	I0425 12:32:39.847651    5918 fix.go:56] duration metric: took 15.003599544s for fixHost
	I0425 12:32:39.847664    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:32:39.847804    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:32:39.847907    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:39.848006    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:39.848102    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:32:39.848208    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:32:39.848340    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0425 12:32:39.848347    5918 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 12:32:39.898718    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714073560.010169234
	
	I0425 12:32:39.898730    5918 fix.go:216] guest clock: 1714073560.010169234
	I0425 12:32:39.898735    5918 fix.go:229] Guest: 2024-04-25 12:32:40.010169234 -0700 PDT Remote: 2024-04-25 12:32:39.847654 -0700 PDT m=+15.456928236 (delta=162.515234ms)
	I0425 12:32:39.898753    5918 fix.go:200] guest clock delta is within tolerance: 162.515234ms
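The guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host clock; the 162.515234ms delta is inside tolerance, so no clock correction is forced. A small Go sketch of that comparison, assuming a 9-digit nanosecond field from %N; the 2-second threshold is an assumption, and only the sample value comes from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1714073560.010169234" // sample `date +%s.%N` output from the log
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	// %N always prints 9 digits, so the fraction parses directly as nanoseconds.
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
}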
	I0425 12:32:39.898757    5918 start.go:83] releasing machines lock for "multinode-034000", held for 15.054722463s
	I0425 12:32:39.898779    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:32:39.898939    5918 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:32:39.899042    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:32:39.899395    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:32:39.899531    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:32:39.899642    5918 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 12:32:39.899683    5918 ssh_runner.go:195] Run: cat /version.json
	I0425 12:32:39.899691    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:32:39.899698    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:32:39.899804    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:32:39.899831    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:32:39.899899    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:39.899943    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:32:39.899996    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:32:39.900054    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:32:39.900076    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:32:39.900134    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:32:39.926465    5918 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0425 12:32:39.926666    5918 ssh_runner.go:195] Run: systemctl --version
	I0425 12:32:40.068870    5918 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0425 12:32:40.069782    5918 command_runner.go:130] > systemd 252 (252)
	I0425 12:32:40.069827    5918 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0425 12:32:40.069947    5918 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0425 12:32:40.075204    5918 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0425 12:32:40.075228    5918 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 12:32:40.075268    5918 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 12:32:40.088664    5918 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0425 12:32:40.088687    5918 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 12:32:40.088693    5918 start.go:494] detecting cgroup driver to use...
	I0425 12:32:40.088798    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:32:40.103382    5918 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0425 12:32:40.103775    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0425 12:32:40.112533    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0425 12:32:40.121432    5918 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0425 12:32:40.121478    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0425 12:32:40.130472    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:32:40.139503    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0425 12:32:40.148370    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:32:40.157184    5918 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 12:32:40.166432    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0425 12:32:40.175528    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0425 12:32:40.184508    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0425 12:32:40.193335    5918 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 12:32:40.201191    5918 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0425 12:32:40.201333    5918 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 12:32:40.209306    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:32:40.302049    5918 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0425 12:32:40.316994    5918 start.go:494] detecting cgroup driver to use...
	I0425 12:32:40.317082    5918 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0425 12:32:40.327473    5918 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0425 12:32:40.327970    5918 command_runner.go:130] > [Unit]
	I0425 12:32:40.327979    5918 command_runner.go:130] > Description=Docker Application Container Engine
	I0425 12:32:40.327983    5918 command_runner.go:130] > Documentation=https://docs.docker.com
	I0425 12:32:40.327988    5918 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0425 12:32:40.327992    5918 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0425 12:32:40.327996    5918 command_runner.go:130] > StartLimitBurst=3
	I0425 12:32:40.328000    5918 command_runner.go:130] > StartLimitIntervalSec=60
	I0425 12:32:40.328003    5918 command_runner.go:130] > [Service]
	I0425 12:32:40.328006    5918 command_runner.go:130] > Type=notify
	I0425 12:32:40.328010    5918 command_runner.go:130] > Restart=on-failure
	I0425 12:32:40.328027    5918 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0425 12:32:40.328042    5918 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0425 12:32:40.328049    5918 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0425 12:32:40.328054    5918 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0425 12:32:40.328060    5918 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0425 12:32:40.328071    5918 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0425 12:32:40.328079    5918 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0425 12:32:40.328088    5918 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0425 12:32:40.328093    5918 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0425 12:32:40.328104    5918 command_runner.go:130] > ExecStart=
	I0425 12:32:40.328117    5918 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0425 12:32:40.328122    5918 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0425 12:32:40.328129    5918 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0425 12:32:40.328139    5918 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0425 12:32:40.328142    5918 command_runner.go:130] > LimitNOFILE=infinity
	I0425 12:32:40.328146    5918 command_runner.go:130] > LimitNPROC=infinity
	I0425 12:32:40.328151    5918 command_runner.go:130] > LimitCORE=infinity
	I0425 12:32:40.328159    5918 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0425 12:32:40.328166    5918 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0425 12:32:40.328171    5918 command_runner.go:130] > TasksMax=infinity
	I0425 12:32:40.328180    5918 command_runner.go:130] > TimeoutStartSec=0
	I0425 12:32:40.328188    5918 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0425 12:32:40.328192    5918 command_runner.go:130] > Delegate=yes
	I0425 12:32:40.328198    5918 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0425 12:32:40.328202    5918 command_runner.go:130] > KillMode=process
	I0425 12:32:40.328205    5918 command_runner.go:130] > [Install]
	I0425 12:32:40.328215    5918 command_runner.go:130] > WantedBy=multi-user.target
	I0425 12:32:40.328298    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:32:40.340825    5918 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 12:32:40.355475    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:32:40.366605    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:32:40.377349    5918 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0425 12:32:40.400322    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:32:40.411078    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:32:40.425690    5918 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0425 12:32:40.425905    5918 ssh_runner.go:195] Run: which cri-dockerd
	I0425 12:32:40.428824    5918 command_runner.go:130] > /usr/bin/cri-dockerd
	I0425 12:32:40.428927    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0425 12:32:40.436244    5918 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0425 12:32:40.450060    5918 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0425 12:32:40.546140    5918 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0425 12:32:40.652998    5918 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0425 12:32:40.653085    5918 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0425 12:32:40.667179    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:32:40.757148    5918 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0425 12:32:43.039948    5918 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.282708602s)
	I0425 12:32:43.040003    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0425 12:32:43.051325    5918 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0425 12:32:43.064044    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0425 12:32:43.075686    5918 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0425 12:32:43.178056    5918 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0425 12:32:43.287637    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:32:43.405468    5918 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0425 12:32:43.419145    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0425 12:32:43.430207    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:32:43.535081    5918 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0425 12:32:43.594002    5918 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0425 12:32:43.594076    5918 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0425 12:32:43.598258    5918 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0425 12:32:43.598269    5918 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0425 12:32:43.598273    5918 command_runner.go:130] > Device: 0,22	Inode: 745         Links: 1
	I0425 12:32:43.598278    5918 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0425 12:32:43.598282    5918 command_runner.go:130] > Access: 2024-04-25 19:32:43.659818747 +0000
	I0425 12:32:43.598287    5918 command_runner.go:130] > Modify: 2024-04-25 19:32:43.659818747 +0000
	I0425 12:32:43.598292    5918 command_runner.go:130] > Change: 2024-04-25 19:32:43.661818377 +0000
	I0425 12:32:43.598295    5918 command_runner.go:130] >  Birth: -
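"Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a bounded poll: stat the path until it exists as a unix socket, or give up at the deadline. A sketch of that wait loop; the 500ms poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or times out.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}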
	I0425 12:32:43.598383    5918 start.go:562] Will wait 60s for crictl version
	I0425 12:32:43.598430    5918 ssh_runner.go:195] Run: which crictl
	I0425 12:32:43.601184    5918 command_runner.go:130] > /usr/bin/crictl
	I0425 12:32:43.601300    5918 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 12:32:43.629863    5918 command_runner.go:130] > Version:  0.1.0
	I0425 12:32:43.629877    5918 command_runner.go:130] > RuntimeName:  docker
	I0425 12:32:43.629881    5918 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0425 12:32:43.629892    5918 command_runner.go:130] > RuntimeApiVersion:  v1
	I0425 12:32:43.630801    5918 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0425 12:32:43.630869    5918 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0425 12:32:43.646774    5918 command_runner.go:130] > 26.0.2
	I0425 12:32:43.647526    5918 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0425 12:32:43.663237    5918 command_runner.go:130] > 26.0.2
	I0425 12:32:43.706821    5918 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0425 12:32:43.706846    5918 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:32:43.707037    5918 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0425 12:32:43.710484    5918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
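The /etc/hosts edit above is a rebuild-and-replace pattern: grep -v filters out any stale host.minikube.internal line, echo appends the fresh tab-separated mapping, and sudo cp installs the rebuilt temp file over the original. A helper that assembles that same one-liner for an arbitrary name/IP (illustrative, not minikube's actual function):

package main

import "fmt"

// ensureHostsEntry builds the shell one-liner from the log: drop any old
// "<tab>name" line, append "ip<tab>name", then sudo-copy over /etc/hosts.
func ensureHostsEntry(ip, name string) string {
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", name, ip, name)
}

func main() {
	fmt.Println(ensureHostsEntry("192.169.0.1", "host.minikube.internal"))
}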
	I0425 12:32:43.720739    5918 kubeadm.go:877] updating cluster {Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.18 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 12:32:43.720837    5918 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:32:43.720898    5918 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0425 12:32:43.732927    5918 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0425 12:32:43.732940    5918 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0425 12:32:43.732945    5918 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 12:32:43.732948    5918 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0425 12:32:43.732957    5918 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0425 12:32:43.732961    5918 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0425 12:32:43.732966    5918 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0425 12:32:43.732969    5918 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0425 12:32:43.732973    5918 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 12:32:43.732977    5918 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0425 12:32:43.733466    5918 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0425 12:32:43.733479    5918 docker.go:615] Images already preloaded, skipping extraction
	I0425 12:32:43.733549    5918 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0425 12:32:43.752055    5918 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0425 12:32:43.752087    5918 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0425 12:32:43.752092    5918 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 12:32:43.752097    5918 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0425 12:32:43.752101    5918 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0425 12:32:43.752105    5918 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0425 12:32:43.752110    5918 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0425 12:32:43.752114    5918 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0425 12:32:43.752118    5918 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 12:32:43.752123    5918 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0425 12:32:43.752717    5918 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0425 12:32:43.752732    5918 cache_images.go:84] Images are preloaded, skipping loading
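The preload check above lists local images with `docker images --format {{.Repository}}:{{.Tag}}` and, because every expected image is already present, skips extracting the preload tarball. A sketch of that presence check, with the expected list trimmed to a few entries copied from the output above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, _ := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	// Subset of the preloaded image list from the log output above.
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	for _, img := range expected {
		fmt.Printf("%-45s present=%v\n", img, have[img])
	}
}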
	I0425 12:32:43.752744    5918 kubeadm.go:928] updating node { 192.169.0.16 8443 v1.30.0 docker true true} ...
	I0425 12:32:43.752820    5918 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-034000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 12:32:43.752887    5918 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0425 12:32:43.768754    5918 command_runner.go:130] > cgroupfs
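The cgroup-driver probe above asks the daemon directly via `docker info --format {{.CgroupDriver}}` and gets "cgroupfs", which is then threaded into the kubelet config below. A sketch of the probe; falling back to "cgroupfs" on failure is an assumption, not confirmed behavior:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	driver := "cgroupfs" // assumed fallback
	if err == nil && strings.TrimSpace(string(out)) != "" {
		driver = strings.TrimSpace(string(out))
	}
	fmt.Println("cgroup driver:", driver)
}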
	I0425 12:32:43.769330    5918 cni.go:84] Creating CNI manager for ""
	I0425 12:32:43.769339    5918 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0425 12:32:43.769350    5918 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 12:32:43.769374    5918 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.16 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-034000 NodeName:multinode-034000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 12:32:43.769465    5918 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-034000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
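
The kubeadm config above is rendered from the options struct logged at kubeadm.go:181. A sketch of that render step using text/template, with the struct trimmed to a handful of fields and the values copied from the log; this mirrors the idea, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A few of the kubeadm options from the log, enough to drive the template.
type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	t.Execute(os.Stdout, kubeadmOpts{
		AdvertiseAddress:  "192.169.0.16",
		APIServerPort:     8443,
		NodeName:          "multinode-034000",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.30.0",
	})
}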
	
	I0425 12:32:43.769522    5918 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 12:32:43.777755    5918 command_runner.go:130] > kubeadm
	I0425 12:32:43.777772    5918 command_runner.go:130] > kubectl
	I0425 12:32:43.777777    5918 command_runner.go:130] > kubelet
	I0425 12:32:43.777864    5918 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 12:32:43.777905    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 12:32:43.785731    5918 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0425 12:32:43.799054    5918 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 12:32:43.812438    5918 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0425 12:32:43.826016    5918 ssh_runner.go:195] Run: grep 192.169.0.16	control-plane.minikube.internal$ /etc/hosts
	I0425 12:32:43.828861    5918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 12:32:43.838704    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:32:43.936475    5918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 12:32:43.949782    5918 certs.go:68] Setting up /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000 for IP: 192.169.0.16
	I0425 12:32:43.949794    5918 certs.go:194] generating shared ca certs ...
	I0425 12:32:43.949806    5918 certs.go:226] acquiring lock for ca certs: {Name:mk1f3cabc8bfb1fa57eb09572b98c6852173235a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:32:43.950003    5918 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key
	I0425 12:32:43.950080    5918 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key
	I0425 12:32:43.950091    5918 certs.go:256] generating profile certs ...
	I0425 12:32:43.950199    5918 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key
	I0425 12:32:43.950275    5918 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.key.b4ea36e9
	I0425 12:32:43.950346    5918 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.key
	I0425 12:32:43.950353    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 12:32:43.950373    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 12:32:43.950392    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 12:32:43.950410    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 12:32:43.950434    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 12:32:43.950466    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 12:32:43.950495    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 12:32:43.950514    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 12:32:43.950607    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem (1338 bytes)
	W0425 12:32:43.950654    5918 certs.go:480] ignoring /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885_empty.pem, impossibly tiny 0 bytes
	I0425 12:32:43.950663    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 12:32:43.950692    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem (1078 bytes)
	I0425 12:32:43.950722    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem (1123 bytes)
	I0425 12:32:43.950753    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem (1675 bytes)
	I0425 12:32:43.950824    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem (1708 bytes)
	I0425 12:32:43.950860    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:32:43.950884    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem -> /usr/share/ca-certificates/1885.pem
	I0425 12:32:43.950902    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /usr/share/ca-certificates/18852.pem
	I0425 12:32:43.951330    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 12:32:43.984509    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 12:32:44.004164    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 12:32:44.023298    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 12:32:44.042612    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 12:32:44.065147    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 12:32:44.084539    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 12:32:44.104053    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 12:32:44.123986    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 12:32:44.143621    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem --> /usr/share/ca-certificates/1885.pem (1338 bytes)
	I0425 12:32:44.163307    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /usr/share/ca-certificates/18852.pem (1708 bytes)
	I0425 12:32:44.182642    5918 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 12:32:44.196021    5918 ssh_runner.go:195] Run: openssl version
	I0425 12:32:44.199981    5918 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0425 12:32:44.200160    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 12:32:44.209346    5918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:32:44.212625    5918 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 25 18:31 /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:32:44.212728    5918 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:31 /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:32:44.212764    5918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:32:44.216988    5918 command_runner.go:130] > b5213941
	I0425 12:32:44.217083    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 12:32:44.225463    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1885.pem && ln -fs /usr/share/ca-certificates/1885.pem /etc/ssl/certs/1885.pem"
	I0425 12:32:44.233887    5918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1885.pem
	I0425 12:32:44.237220    5918 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 25 18:39 /usr/share/ca-certificates/1885.pem
	I0425 12:32:44.237265    5918 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:39 /usr/share/ca-certificates/1885.pem
	I0425 12:32:44.237301    5918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1885.pem
	I0425 12:32:44.241322    5918 command_runner.go:130] > 51391683
	I0425 12:32:44.241562    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1885.pem /etc/ssl/certs/51391683.0"
	I0425 12:32:44.249908    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18852.pem && ln -fs /usr/share/ca-certificates/18852.pem /etc/ssl/certs/18852.pem"
	I0425 12:32:44.258157    5918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18852.pem
	I0425 12:32:44.261521    5918 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 25 18:39 /usr/share/ca-certificates/18852.pem
	I0425 12:32:44.261603    5918 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:39 /usr/share/ca-certificates/18852.pem
	I0425 12:32:44.261639    5918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18852.pem
	I0425 12:32:44.265689    5918 command_runner.go:130] > 3ec20f2e
	I0425 12:32:44.265857    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18852.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 12:32:44.274296    5918 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 12:32:44.277590    5918 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 12:32:44.277600    5918 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0425 12:32:44.277605    5918 command_runner.go:130] > Device: 253,1	Inode: 7337288     Links: 1
	I0425 12:32:44.277610    5918 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0425 12:32:44.277623    5918 command_runner.go:130] > Access: 2024-04-25 19:23:54.111940017 +0000
	I0425 12:32:44.277628    5918 command_runner.go:130] > Modify: 2024-04-25 19:23:54.111940017 +0000
	I0425 12:32:44.277632    5918 command_runner.go:130] > Change: 2024-04-25 19:23:54.111940017 +0000
	I0425 12:32:44.277637    5918 command_runner.go:130] >  Birth: 2024-04-25 19:23:54.111940017 +0000
	I0425 12:32:44.277681    5918 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 12:32:44.282065    5918 command_runner.go:130] > Certificate will not expire
	I0425 12:32:44.282144    5918 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 12:32:44.286396    5918 command_runner.go:130] > Certificate will not expire
	I0425 12:32:44.286476    5918 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 12:32:44.290816    5918 command_runner.go:130] > Certificate will not expire
	I0425 12:32:44.290879    5918 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 12:32:44.295096    5918 command_runner.go:130] > Certificate will not expire
	I0425 12:32:44.295194    5918 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 12:32:44.299392    5918 command_runner.go:130] > Certificate will not expire
	I0425 12:32:44.299462    5918 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 12:32:44.303637    5918 command_runner.go:130] > Certificate will not expire
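Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours (86,400 seconds); "Certificate will not expire" means NotAfter is further out than that. The same check expressed in Go, loading a PEM certificate and comparing NotAfter; the path is taken from the stat output above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the cert at path expires within d,
// mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}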
	I0425 12:32:44.303765    5918 kubeadm.go:391] StartCluster: {Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.18 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:32:44.303890    5918 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0425 12:32:44.315836    5918 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0425 12:32:44.323533    5918 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0425 12:32:44.323549    5918 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0425 12:32:44.323556    5918 command_runner.go:130] > /var/lib/minikube/etcd:
	I0425 12:32:44.323561    5918 command_runner.go:130] > member
	W0425 12:32:44.323612    5918 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 12:32:44.323620    5918 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 12:32:44.323626    5918 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 12:32:44.323675    5918 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 12:32:44.331710    5918 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:32:44.332021    5918 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-034000" does not appear in /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:32:44.332109    5918 kubeconfig.go:62] /Users/jenkins/minikube-integration/18757-1425/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-034000" cluster setting kubeconfig missing "multinode-034000" context setting]
	I0425 12:32:44.332343    5918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-1425/kubeconfig: {Name:mk225259838427b91a16bb598157785cd2bcef65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:32:44.333018    5918 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:32:44.333223    5918 kapi.go:59] client config for multinode-034000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key", CAFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdc47ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0425 12:32:44.333583    5918 cert_rotation.go:137] Starting client certificate rotation controller
	I0425 12:32:44.333727    5918 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 12:32:44.341204    5918 kubeadm.go:624] The running cluster does not require reconfiguration: 192.169.0.16
	I0425 12:32:44.341224    5918 kubeadm.go:1154] stopping kube-system containers ...
	I0425 12:32:44.341275    5918 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0425 12:32:44.353392    5918 command_runner.go:130] > d1a679398f5d
	I0425 12:32:44.353407    5918 command_runner.go:130] > 5a723e5001a4
	I0425 12:32:44.353410    5918 command_runner.go:130] > 4035615d6144
	I0425 12:32:44.353413    5918 command_runner.go:130] > 6ceafd789a01
	I0425 12:32:44.353417    5918 command_runner.go:130] > 6bbf310089ed
	I0425 12:32:44.353420    5918 command_runner.go:130] > 573591286ee6
	I0425 12:32:44.353424    5918 command_runner.go:130] > 2dd0e2cf1dfa
	I0425 12:32:44.353426    5918 command_runner.go:130] > f9e2b9879728
	I0425 12:32:44.353430    5918 command_runner.go:130] > 03ce4bf6442b
	I0425 12:32:44.353436    5918 command_runner.go:130] > a3c296be16a7
	I0425 12:32:44.353439    5918 command_runner.go:130] > 5d1799046a89
	I0425 12:32:44.353442    5918 command_runner.go:130] > 691ca6c89d9a
	I0425 12:32:44.353445    5918 command_runner.go:130] > f6c4a60e9a52
	I0425 12:32:44.353448    5918 command_runner.go:130] > 605cbae9d65d
	I0425 12:32:44.353452    5918 command_runner.go:130] > 351cbbeac7ea
	I0425 12:32:44.353455    5918 command_runner.go:130] > 46ac0a1db04a
	I0425 12:32:44.353851    5918 docker.go:483] Stopping containers: [d1a679398f5d 5a723e5001a4 4035615d6144 6ceafd789a01 6bbf310089ed 573591286ee6 2dd0e2cf1dfa f9e2b9879728 03ce4bf6442b a3c296be16a7 5d1799046a89 691ca6c89d9a f6c4a60e9a52 605cbae9d65d 351cbbeac7ea 46ac0a1db04a]
	I0425 12:32:44.353922    5918 ssh_runner.go:195] Run: docker stop d1a679398f5d 5a723e5001a4 4035615d6144 6ceafd789a01 6bbf310089ed 573591286ee6 2dd0e2cf1dfa f9e2b9879728 03ce4bf6442b a3c296be16a7 5d1799046a89 691ca6c89d9a f6c4a60e9a52 605cbae9d65d 351cbbeac7ea 46ac0a1db04a
	I0425 12:32:44.366821    5918 command_runner.go:130] > d1a679398f5d
	I0425 12:32:44.366834    5918 command_runner.go:130] > 5a723e5001a4
	I0425 12:32:44.366985    5918 command_runner.go:130] > 4035615d6144
	I0425 12:32:44.366992    5918 command_runner.go:130] > 6ceafd789a01
	I0425 12:32:44.366997    5918 command_runner.go:130] > 6bbf310089ed
	I0425 12:32:44.367000    5918 command_runner.go:130] > 573591286ee6
	I0425 12:32:44.367003    5918 command_runner.go:130] > 2dd0e2cf1dfa
	I0425 12:32:44.367007    5918 command_runner.go:130] > f9e2b9879728
	I0425 12:32:44.367010    5918 command_runner.go:130] > 03ce4bf6442b
	I0425 12:32:44.367014    5918 command_runner.go:130] > a3c296be16a7
	I0425 12:32:44.367017    5918 command_runner.go:130] > 5d1799046a89
	I0425 12:32:44.367020    5918 command_runner.go:130] > 691ca6c89d9a
	I0425 12:32:44.367023    5918 command_runner.go:130] > f6c4a60e9a52
	I0425 12:32:44.367026    5918 command_runner.go:130] > 605cbae9d65d
	I0425 12:32:44.367029    5918 command_runner.go:130] > 351cbbeac7ea
	I0425 12:32:44.367032    5918 command_runner.go:130] > 46ac0a1db04a
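Before reconfiguring, the runner collects every container whose name matches the kube-system pod naming pattern and stops them all in a single `docker stop`. A rough Go sketch of that list-then-stop step, assuming a local `docker` CLI and keeping error handling minimal:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List container IDs whose names match kube-system pod containers,
	// as in the logged `docker ps -a --filter=name=... --format={{.ID}}`.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	fmt.Println("Stopping containers:", ids)
	// Stop them all in one invocation, as the log does.
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		panic(err)
	}
}
```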
	I0425 12:32:44.367638    5918 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 12:32:44.380146    5918 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 12:32:44.387336    5918 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0425 12:32:44.387346    5918 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0425 12:32:44.387351    5918 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0425 12:32:44.387357    5918 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 12:32:44.387371    5918 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 12:32:44.387380    5918 kubeadm.go:156] found existing configuration files:
	
	I0425 12:32:44.387416    5918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 12:32:44.394388    5918 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 12:32:44.394407    5918 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 12:32:44.394443    5918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 12:32:44.401581    5918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 12:32:44.408643    5918 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 12:32:44.408660    5918 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 12:32:44.408698    5918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 12:32:44.415901    5918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 12:32:44.422821    5918 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 12:32:44.422839    5918 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 12:32:44.422874    5918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 12:32:44.430046    5918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 12:32:44.436903    5918 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 12:32:44.454576    5918 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 12:32:44.454678    5918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 12:32:44.463863    5918 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 12:32:44.471221    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 12:32:44.544587    5918 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 12:32:44.544721    5918 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0425 12:32:44.544892    5918 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0425 12:32:44.545064    5918 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 12:32:44.545310    5918 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0425 12:32:44.545468    5918 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0425 12:32:44.545760    5918 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0425 12:32:44.545942    5918 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0425 12:32:44.546113    5918 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0425 12:32:44.546329    5918 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 12:32:44.546475    5918 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 12:32:44.546636    5918 command_runner.go:130] > [certs] Using the existing "sa" key
	I0425 12:32:44.547548    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 12:32:44.583228    5918 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 12:32:44.829372    5918 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 12:32:44.889878    5918 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 12:32:45.035081    5918 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 12:32:45.146296    5918 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 12:32:45.294556    5918 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 12:32:45.296523    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 12:32:45.348724    5918 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 12:32:45.349617    5918 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 12:32:45.349664    5918 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0425 12:32:45.452316    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 12:32:45.512020    5918 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 12:32:45.512034    5918 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 12:32:45.514378    5918 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 12:32:45.514482    5918 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 12:32:45.516053    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 12:32:45.583019    5918 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
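Rather than a full `kubeadm init`, the restart path replays individual init phases in a fixed order: certs, kubeconfig, kubelet-start, control-plane, and etcd. A sketch of that sequence, assuming `kubeadm` is on PATH locally (minikube actually runs it over SSH with the pinned binary path shown in the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phase order taken from the log above; the config path matches it too.
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
```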
	I0425 12:32:45.586047    5918 api_server.go:52] waiting for apiserver process to appear ...
	I0425 12:32:45.586118    5918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:32:46.086215    5918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:32:46.586255    5918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:32:46.600339    5918 command_runner.go:130] > 1675
	I0425 12:32:46.600384    5918 api_server.go:72] duration metric: took 1.014315659s to wait for apiserver process to appear ...
	I0425 12:32:46.600391    5918 api_server.go:88] waiting for apiserver healthz status ...
	I0425 12:32:46.600406    5918 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:32:49.023414    5918 api_server.go:279] https://192.169.0.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 12:32:49.023441    5918 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 12:32:49.023450    5918 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:32:49.042648    5918 api_server.go:279] https://192.169.0.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 12:32:49.042662    5918 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 12:32:49.102526    5918 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:32:49.115965    5918 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 12:32:49.115985    5918 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 12:32:49.602563    5918 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:32:49.609200    5918 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 12:32:49.609217    5918 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 12:32:50.100570    5918 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:32:50.103670    5918 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 12:32:50.103687    5918 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 12:32:50.600835    5918 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:32:50.604083    5918 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
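The healthz polling above shows the typical recovery sequence: 403 while anonymous access is still forbidden, 500 while post-start hooks (the `[-]` entries) are pending, then 200. A simplified Go poll loop in the same spirit; note it skips TLS verification for brevity, whereas the real client trusts the cluster CA and presents a client certificate:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Simplification for the sketch: no certificate verification.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://192.169.0.16:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 both mean "not ready yet": RBAC has not
			// bootstrapped, or post-start hooks are still failing.
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
```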
	I0425 12:32:50.604139    5918 round_trippers.go:463] GET https://192.169.0.16:8443/version
	I0425 12:32:50.604145    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:50.604152    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:50.604160    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:50.609459    5918 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 12:32:50.609467    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:50.609472    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:50.609475    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:50.609480    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:50.609482    5918 round_trippers.go:580]     Content-Length: 263
	I0425 12:32:50.609485    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:50 GMT
	I0425 12:32:50.609488    5918 round_trippers.go:580]     Audit-Id: d092d93f-336c-45d8-a222-cadf6744e3e7
	I0425 12:32:50.609491    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:50.609507    5918 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0425 12:32:50.609565    5918 api_server.go:141] control plane version: v1.30.0
	I0425 12:32:50.609574    5918 api_server.go:131] duration metric: took 4.009058779s to wait for apiserver health ...
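The control-plane version is read by decoding the JSON body of GET /version shown above. The canonical type is `version.Info` in k8s.io/apimachinery; the struct below is a trimmed stand-in for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Info mirrors the fields of the /version response body shown above.
type Info struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	GoVersion  string `json:"goVersion"`
	Platform   string `json:"platform"`
}

func main() {
	body := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.0",
	  "goVersion":"go1.22.2","compiler":"gc","platform":"linux/amd64"}`)
	var v Info
	if err := json.Unmarshal(body, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.30.0
}
```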
	I0425 12:32:50.609580    5918 cni.go:84] Creating CNI manager for ""
	I0425 12:32:50.609585    5918 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0425 12:32:50.631937    5918 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0425 12:32:50.651810    5918 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0425 12:32:50.656358    5918 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0425 12:32:50.656369    5918 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0425 12:32:50.656374    5918 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0425 12:32:50.656379    5918 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0425 12:32:50.656384    5918 command_runner.go:130] > Access: 2024-04-25 19:32:34.868573156 +0000
	I0425 12:32:50.656390    5918 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0425 12:32:50.656394    5918 command_runner.go:130] > Change: 2024-04-25 19:32:32.688339054 +0000
	I0425 12:32:50.656397    5918 command_runner.go:130] >  Birth: -
	I0425 12:32:50.656612    5918 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0425 12:32:50.656621    5918 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0425 12:32:50.686774    5918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0425 12:32:51.038298    5918 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0425 12:32:51.059260    5918 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0425 12:32:51.134086    5918 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0425 12:32:51.183295    5918 command_runner.go:130] > daemonset.apps/kindnet configured
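Applying the kindnet manifest is a plain `kubectl apply` against the copied file, using the in-VM kubeconfig. A sketch of the equivalent invocation, assuming the same binary and file paths as the logged command:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Paths mirror the logged command; adjust for your environment.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.30.0/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```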
	I0425 12:32:51.184865    5918 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 12:32:51.184920    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:32:51.184925    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.184933    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.184945    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.188097    5918 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:32:51.188109    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.188116    5918 round_trippers.go:580]     Audit-Id: 1a033167-4368-4d6a-962e-b0225f9d0c5e
	I0425 12:32:51.188121    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.188124    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.188127    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.188131    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.188135    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.189761    5918 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"845"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"816","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87462 chars]
	I0425 12:32:51.192932    5918 system_pods.go:59] 12 kube-system pods found
	I0425 12:32:51.192947    5918 system_pods.go:61] "coredns-7db6d8ff4d-w5z5l" [21ddb5bc-fcf1-4ec4-9fdb-8595d406b302] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 12:32:51.192952    5918 system_pods.go:61] "etcd-multinode-034000" [fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 12:32:51.192959    5918 system_pods.go:61] "kindnet-7ktv2" [957b7d0e-0754-481e-aa73-6772434e58e3] Running
	I0425 12:32:51.192962    5918 system_pods.go:61] "kindnet-gmxwj" [eb9b5a06-bd76-43b9-b8f9-8f5e1243769d] Running
	I0425 12:32:51.192967    5918 system_pods.go:61] "kindnet-spsv9" [fa2c70be-02ec-404a-9eb0-7862c49d8b3b] Running
	I0425 12:32:51.192970    5918 system_pods.go:61] "kube-apiserver-multinode-034000" [d142ad34-9a12-42f9-b92d-e0f968eaaa14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 12:32:51.192974    5918 system_pods.go:61] "kube-controller-manager-multinode-034000" [19072fbe-3cb2-4b92-bd98-b549daec4cf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 12:32:51.192977    5918 system_pods.go:61] "kube-proxy-d8zc5" [feefb48f-5488-4adc-b7e8-47f5d92bd2f8] Running
	I0425 12:32:51.192980    5918 system_pods.go:61] "kube-proxy-gmspl" [b0f6c7c8-ef54-4c63-9de2-05e01ace3e15] Running
	I0425 12:32:51.192983    5918 system_pods.go:61] "kube-proxy-mp7qm" [cc106198-3317-44e2-b1a7-cc5eac6dcadc] Running
	I0425 12:32:51.192986    5918 system_pods.go:61] "kube-scheduler-multinode-034000" [889fb9d4-d8d9-4a92-be22-d0ab1518bc93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 12:32:51.192989    5918 system_pods.go:61] "storage-provisioner" [89c78c52-dabe-4a5b-ac3b-0209ccb11139] Running
	I0425 12:32:51.192994    5918 system_pods.go:74] duration metric: took 8.120416ms to wait for pod list to return data ...
	I0425 12:32:51.193000    5918 node_conditions.go:102] verifying NodePressure condition ...
	I0425 12:32:51.193038    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0425 12:32:51.193042    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.193048    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.193051    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.194894    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:51.194903    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.194908    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.194913    5918 round_trippers.go:580]     Audit-Id: 52a9005d-b71b-4e53-9024-858f11bf1be3
	I0425 12:32:51.194917    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.194920    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.194922    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.194925    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.195133    5918 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"845"},"items":[{"metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"774","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 15059 chars]
	I0425 12:32:51.195684    5918 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:32:51.195697    5918 node_conditions.go:123] node cpu capacity is 2
	I0425 12:32:51.195706    5918 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:32:51.195710    5918 node_conditions.go:123] node cpu capacity is 2
	I0425 12:32:51.195714    5918 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:32:51.195717    5918 node_conditions.go:123] node cpu capacity is 2
	I0425 12:32:51.195720    5918 node_conditions.go:105] duration metric: took 2.717472ms to run NodePressure ...
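The NodePressure step reads each node's capacity and condition status from the NodeList response above. A client-go sketch of the same read, with an illustrative kubeconfig path:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity figures matching the logged metrics.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			// NodePressure verification: pressure conditions should be False.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
```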
	I0425 12:32:51.195730    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 12:32:51.291721    5918 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0425 12:32:51.444440    5918 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0425 12:32:51.445416    5918 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 12:32:51.445471    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0425 12:32:51.445476    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.445482    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.445485    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.447305    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:51.447314    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.447319    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.447322    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.447325    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.447337    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.447341    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.447345    5918 round_trippers.go:580]     Audit-Id: 87fe682d-15c7-4fe1-b838-8eb88ff673fb
	I0425 12:32:51.447677    5918 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"848"},"items":[{"metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30912 chars]
	I0425 12:32:51.448423    5918 kubeadm.go:733] kubelet initialised
	I0425 12:32:51.448432    5918 kubeadm.go:734] duration metric: took 3.007349ms waiting for restarted kubelet to initialise ...
	I0425 12:32:51.448439    5918 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 12:32:51.448467    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:32:51.448472    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.448478    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.448483    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.450515    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:51.450527    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.450535    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.450540    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.450544    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.450548    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.450551    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.450555    5918 round_trippers.go:580]     Audit-Id: 6f64b0f7-3334-48fd-ad7e-600ae7235238
	I0425 12:32:51.451643    5918 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"848"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"816","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87462 chars]
	I0425 12:32:51.453549    5918 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace to be "Ready" ...
	I0425 12:32:51.453592    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:51.453597    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.453603    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.453607    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.454872    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:51.454880    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.454885    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.454888    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.454892    5918 round_trippers.go:580]     Audit-Id: 9edf35e7-fafc-4e7c-9a41-4b9fb2aa4f03
	I0425 12:32:51.454896    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.454900    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.454904    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.455035    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"816","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0425 12:32:51.455289    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:51.455296    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.455302    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.455306    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.456329    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:51.456340    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.456347    5918 round_trippers.go:580]     Audit-Id: 5ebec3cb-e75b-4905-b9e8-870f5f2214c5
	I0425 12:32:51.456352    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.456356    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.456361    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.456365    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.456369    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.456515    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"774","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0425 12:32:51.456725    5918 pod_ready.go:97] node "multinode-034000" hosting pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000" has status "Ready":"False"
	I0425 12:32:51.456734    5918 pod_ready.go:81] duration metric: took 3.174776ms for pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace to be "Ready" ...
	E0425 12:32:51.456739    5918 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-034000" hosting pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000" has status "Ready":"False"
	I0425 12:32:51.456747    5918 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:32:51.456779    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:32:51.456784    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.456790    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.456798    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.457666    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:32:51.457672    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.457676    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.457684    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.457688    5918 round_trippers.go:580]     Audit-Id: bfbcfd84-72a8-4ce4-8fe4-b34c7c02b2b9
	I0425 12:32:51.457691    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.457694    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.457697    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.457824    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:32:51.458042    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:51.458049    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.458055    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.458059    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.459338    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:51.459348    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.459354    5918 round_trippers.go:580]     Audit-Id: 7b7489c0-5bbc-4e59-8a20-492cf7463400
	I0425 12:32:51.459358    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.459362    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.459365    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.459368    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.459370    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.459562    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"774","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0425 12:32:51.459732    5918 pod_ready.go:97] node "multinode-034000" hosting pod "etcd-multinode-034000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000" has status "Ready":"False"
	I0425 12:32:51.459744    5918 pod_ready.go:81] duration metric: took 2.991986ms for pod "etcd-multinode-034000" in "kube-system" namespace to be "Ready" ...
	E0425 12:32:51.459750    5918 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-034000" hosting pod "etcd-multinode-034000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000" has status "Ready":"False"
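The pod wait loop short-circuits when the hosting node is not Ready, which is why both coredns and etcd are skipped above. A client-go sketch of that node-gated readiness check, using the pod and node names from the log and an illustrative kubeconfig path:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-034000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if !nodeReady(node) {
		// Mirrors the logged skip: no point waiting on pods of a NotReady node.
		fmt.Printf("node %s not Ready, skipping pod wait\n", node.Name)
		return
	}
	fmt.Printf("pod ready: %v\n", podReady(pod))
}
```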
	I0425 12:32:51.459761    5918 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:32:51.459798    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-034000
	I0425 12:32:51.459803    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.459808    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.459812    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.460778    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:32:51.460786    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.460790    5918 round_trippers.go:580]     Audit-Id: 50edc43d-7ad3-4b00-85e7-5921a7af03de
	I0425 12:32:51.460793    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.460796    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.460799    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.460802    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.460806    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.460924    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-034000","namespace":"kube-system","uid":"d142ad34-9a12-42f9-b92d-e0f968eaaa14","resourceVersion":"818","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"d809c763efd59e895582aab9f4e65d83","kubernetes.io/config.mirror":"d809c763efd59e895582aab9f4e65d83","kubernetes.io/config.seen":"2024-04-25T19:24:03.349967563Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0425 12:32:51.461167    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:51.461173    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.461178    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.461181    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.462008    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:32:51.462015    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.462020    5918 round_trippers.go:580]     Audit-Id: b555025e-3adb-4bdc-87f1-eb7726bfc4b6
	I0425 12:32:51.462025    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.462029    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.462034    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.462038    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.462042    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.462158    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"774","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0425 12:32:51.462333    5918 pod_ready.go:97] node "multinode-034000" hosting pod "kube-apiserver-multinode-034000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000" has status "Ready":"False"
	I0425 12:32:51.462342    5918 pod_ready.go:81] duration metric: took 2.574921ms for pod "kube-apiserver-multinode-034000" in "kube-system" namespace to be "Ready" ...
	E0425 12:32:51.462348    5918 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-034000" hosting pod "kube-apiserver-multinode-034000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000" has status "Ready":"False"
	I0425 12:32:51.462352    5918 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:32:51.462378    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-034000
	I0425 12:32:51.462383    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.462388    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.462393    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.463309    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:32:51.463316    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.463323    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.463327    5918 round_trippers.go:580]     Audit-Id: 76fe5be9-cf11-4c45-91b0-e2c058974ff0
	I0425 12:32:51.463332    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.463336    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.463339    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.463343    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.463471    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-034000","namespace":"kube-system","uid":"19072fbe-3cb2-4b92-bd98-b549daec4cf2","resourceVersion":"819","creationTimestamp":"2024-04-25T19:24:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8241dd03fc3448a4525ccebdefebf535","kubernetes.io/config.mirror":"8241dd03fc3448a4525ccebdefebf535","kubernetes.io/config.seen":"2024-04-25T19:23:58.495195502Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0425 12:32:51.585018    5918 request.go:629] Waited for 121.306595ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:51.585082    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:51.585088    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.585094    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.585097    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.586451    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:51.586462    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.586467    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.586471    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.586474    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.586477    5918 round_trippers.go:580]     Audit-Id: fdd37caa-0fd0-42df-8132-4117c46049f4
	I0425 12:32:51.586481    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.586484    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.586552    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"774","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0425 12:32:51.586753    5918 pod_ready.go:97] node "multinode-034000" hosting pod "kube-controller-manager-multinode-034000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000" has status "Ready":"False"
	I0425 12:32:51.586766    5918 pod_ready.go:81] duration metric: took 124.404985ms for pod "kube-controller-manager-multinode-034000" in "kube-system" namespace to be "Ready" ...
	E0425 12:32:51.586773    5918 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-034000" hosting pod "kube-controller-manager-multinode-034000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000" has status "Ready":"False"
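	[The "Waited for ... due to client-side throttling, not priority and fairness" entries above, and throughout the rest of this wait loop, come from client-go's default client-side rate limiter rather than from server-side API priority and fairness. A minimal sketch of the mechanism, assuming client-go's stock defaults (QPS=5, Burst=10); the helper name and raised values are illustrative, not minikube's actual configuration:

	    package main

	    import (
	        "fmt"

	        "k8s.io/client-go/rest"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // newRESTConfig is a hypothetical helper: load a kubeconfig and raise the
	    // client-side rate limits. With client-go's defaults (QPS=5, Burst=10), a
	    // tight polling loop like the readiness checks in this log exhausts the
	    // burst quickly, after which each request waits up to ~200ms (1/QPS) --
	    // matching the 120-200ms delays logged above.
	    func newRESTConfig(kubeconfig string) (*rest.Config, error) {
	        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            return nil, fmt.Errorf("load kubeconfig: %w", err)
	        }
	        cfg.QPS = 50    // default is 5 requests/second once the burst is spent
	        cfg.Burst = 100 // default is 10 requests allowed before throttling begins
	        return cfg, nil
	    }
	]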
	I0425 12:32:51.586778    5918 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d8zc5" in "kube-system" namespace to be "Ready" ...
	I0425 12:32:51.785699    5918 request.go:629] Waited for 198.821108ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8zc5
	I0425 12:32:51.785788    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8zc5
	I0425 12:32:51.785799    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.785810    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.785816    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.788154    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:51.788165    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.788171    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.788174    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.788176    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.788180    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.788182    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:51 GMT
	I0425 12:32:51.788185    5918 round_trippers.go:580]     Audit-Id: 05285501-2579-422c-8c2b-228a3ad8076a
	I0425 12:32:51.788296    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d8zc5","generateName":"kube-proxy-","namespace":"kube-system","uid":"feefb48f-5488-4adc-b7e8-47f5d92bd2f8","resourceVersion":"667","creationTimestamp":"2024-04-25T19:25:33Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:25:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6056 chars]
	I0425 12:32:51.987022    5918 request.go:629] Waited for 198.361007ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:32:51.987081    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:32:51.987116    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:51.987132    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:51.987165    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:51.989570    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:51.989585    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:51.989592    5918 round_trippers.go:580]     Audit-Id: 05795241-2291-4bc7-bd31-088b769a6916
	I0425 12:32:51.989597    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:51.989602    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:51.989605    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:51.989613    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:51.989617    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:52 GMT
	I0425 12:32:51.989782    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"a08f7c72-c78c-42d9-aa96-d065a8c730b6","resourceVersion":"679","creationTimestamp":"2024-04-25T19:25:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_25_33_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:25:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3898 chars]
	I0425 12:32:51.990020    5918 pod_ready.go:97] node "multinode-034000-m03" hosting pod "kube-proxy-d8zc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000-m03" has status "Ready":"Unknown"
	I0425 12:32:51.990034    5918 pod_ready.go:81] duration metric: took 403.237936ms for pod "kube-proxy-d8zc5" in "kube-system" namespace to be "Ready" ...
	E0425 12:32:51.990045    5918 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-034000-m03" hosting pod "kube-proxy-d8zc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000-m03" has status "Ready":"Unknown"
	I0425 12:32:51.990051    5918 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gmspl" in "kube-system" namespace to be "Ready" ...
	I0425 12:32:52.185031    5918 request.go:629] Waited for 194.913246ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmspl
	I0425 12:32:52.185081    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmspl
	I0425 12:32:52.185091    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:52.185125    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:52.185133    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:52.187762    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:52.187778    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:52.187785    5918 round_trippers.go:580]     Audit-Id: cf9c3ab1-8442-4c90-8df6-3493f3be9537
	I0425 12:32:52.187790    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:52.187793    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:52.187814    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:52.187819    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:52.187823    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:52 GMT
	I0425 12:32:52.187953    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gmspl","generateName":"kube-proxy-","namespace":"kube-system","uid":"b0f6c7c8-ef54-4c63-9de2-05e01ace3e15","resourceVersion":"842","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0425 12:32:52.385354    5918 request.go:629] Waited for 197.0113ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:52.385455    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:52.385463    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:52.385477    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:52.385485    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:52.388181    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:52.388197    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:52.388204    5918 round_trippers.go:580]     Audit-Id: e8c8de87-b5cd-4e3c-af36-0c01dd821167
	I0425 12:32:52.388208    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:52.388212    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:52.388215    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:52.388219    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:52.388223    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:52 GMT
	I0425 12:32:52.388347    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"774","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0425 12:32:52.388611    5918 pod_ready.go:97] node "multinode-034000" hosting pod "kube-proxy-gmspl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000" has status "Ready":"False"
	I0425 12:32:52.388625    5918 pod_ready.go:81] duration metric: took 398.55506ms for pod "kube-proxy-gmspl" in "kube-system" namespace to be "Ready" ...
	E0425 12:32:52.388633    5918 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-034000" hosting pod "kube-proxy-gmspl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000" has status "Ready":"False"
	I0425 12:32:52.388641    5918 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mp7qm" in "kube-system" namespace to be "Ready" ...
	I0425 12:32:52.585681    5918 request.go:629] Waited for 196.990118ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mp7qm
	I0425 12:32:52.585749    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mp7qm
	I0425 12:32:52.585761    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:52.585771    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:52.585779    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:52.588336    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:52.588351    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:52.588359    5918 round_trippers.go:580]     Audit-Id: d225a2cb-d68b-4632-b285-d7fb013cccee
	I0425 12:32:52.588364    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:52.588369    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:52.588372    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:52.588375    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:52.588378    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:52 GMT
	I0425 12:32:52.588454    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mp7qm","generateName":"kube-proxy-","namespace":"kube-system","uid":"cc106198-3317-44e2-b1a7-cc5eac6dcadc","resourceVersion":"479","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0425 12:32:52.785033    5918 request.go:629] Waited for 196.242617ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:32:52.785136    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:32:52.785147    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:52.785161    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:52.785168    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:52.787934    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:52.787951    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:52.787958    5918 round_trippers.go:580]     Audit-Id: 8c355add-6822-4d84-8e21-9ec8037e0cab
	I0425 12:32:52.787963    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:52.787968    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:52.787972    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:52.787977    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:52.787981    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:52 GMT
	I0425 12:32:52.788073    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"545","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0425 12:32:52.788292    5918 pod_ready.go:92] pod "kube-proxy-mp7qm" in "kube-system" namespace has status "Ready":"True"
	I0425 12:32:52.788304    5918 pod_ready.go:81] duration metric: took 399.64579ms for pod "kube-proxy-mp7qm" in "kube-system" namespace to be "Ready" ...
	I0425 12:32:52.788312    5918 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:32:52.985073    5918 request.go:629] Waited for 196.686495ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-034000
	I0425 12:32:52.985188    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-034000
	I0425 12:32:52.985196    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:52.985205    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:52.985210    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:52.986868    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:52.986881    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:52.986886    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:52.986889    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:52.986893    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:52.986895    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:53 GMT
	I0425 12:32:52.986898    5918 round_trippers.go:580]     Audit-Id: 43008b59-10ac-4ada-8092-5bbcb93f25ea
	I0425 12:32:52.986901    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:52.986975    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-034000","namespace":"kube-system","uid":"889fb9d4-d8d9-4a92-be22-d0ab1518bc93","resourceVersion":"817","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9a729f77a28640b9fa006c14e6acbd43","kubernetes.io/config.mirror":"9a729f77a28640b9fa006c14e6acbd43","kubernetes.io/config.seen":"2024-04-25T19:24:03.349969029Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0425 12:32:53.185047    5918 request.go:629] Waited for 197.813128ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:53.185180    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:53.185195    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:53.185207    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:53.185214    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:53.187968    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:53.187983    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:53.187990    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:53.187994    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:53.187998    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:53.188004    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:53.188010    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:53 GMT
	I0425 12:32:53.188015    5918 round_trippers.go:580]     Audit-Id: db2945fc-8091-47be-8e24-c1413e9a2057
	I0425 12:32:53.188336    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"774","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0425 12:32:53.188608    5918 pod_ready.go:97] node "multinode-034000" hosting pod "kube-scheduler-multinode-034000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000" has status "Ready":"False"
	I0425 12:32:53.188622    5918 pod_ready.go:81] duration metric: took 400.291779ms for pod "kube-scheduler-multinode-034000" in "kube-system" namespace to be "Ready" ...
	E0425 12:32:53.188631    5918 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-034000" hosting pod "kube-scheduler-multinode-034000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000" has status "Ready":"False"
	I0425 12:32:53.188638    5918 pod_ready.go:38] duration metric: took 1.740139968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
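	[The pod_ready.go lines above record a bounded poll: fetch each system-critical pod, check its Ready condition, and skip with the WaitExtra error when the hosting node is itself not Ready. The following is an illustrative reconstruction of that wait pattern using client-go, not minikube's actual pod_ready.go:

	    package main

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitPodReady polls a pod until its Ready condition is True, giving up
	    // after the same 4m0s budget the log shows for each pod.
	    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // treat fetch errors as transient; keep polling
	                }
	                for _, cond := range pod.Status.Conditions {
	                    if cond.Type == corev1.PodReady {
	                        return cond.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil // Ready condition not posted yet
	            })
	    }
	]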
	I0425 12:32:53.188651    5918 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 12:32:53.198316    5918 command_runner.go:130] > -16
	I0425 12:32:53.198587    5918 ops.go:34] apiserver oom_adj: -16
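	[The oom_adj probe above confirms the restarted apiserver kept its kernel OOM score: -16 tells the Linux OOM killer to strongly prefer killing other processes first. Reduced to plain Go as a hypothetical stand-in for the ssh_runner invocation, run on the node itself:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )

	    // apiserverOOMAdj mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj`:
	    // find the kube-apiserver PID, then read its legacy oom_adj value.
	    func apiserverOOMAdj() (string, error) {
	        out, err := exec.Command("pgrep", "kube-apiserver").Output()
	        if err != nil {
	            return "", fmt.Errorf("pgrep kube-apiserver: %w", err)
	        }
	        pid := strings.Fields(string(out))[0] // first match, as $(pgrep ...) would expand
	        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	        if err != nil {
	            return "", err
	        }
	        return strings.TrimSpace(string(adj)), nil // "-16" in the run above
	    }
	]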
	I0425 12:32:53.198594    5918 kubeadm.go:591] duration metric: took 8.874697968s to restartPrimaryControlPlane
	I0425 12:32:53.198599    5918 kubeadm.go:393] duration metric: took 8.89457405s to StartCluster
	I0425 12:32:53.198608    5918 settings.go:142] acquiring lock: {Name:mk8a221f9e3ce6c550df0488a0a92b106f308663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:32:53.198696    5918 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:32:53.199154    5918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-1425/kubeconfig: {Name:mk225259838427b91a16bb598157785cd2bcef65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:32:53.200212    5918 start.go:234] Will wait 6m0s for node &{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0425 12:32:53.223671    5918 out.go:177] * Verifying Kubernetes components...
	I0425 12:32:53.200241    5918 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 12:32:53.200366    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:32:53.243622    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:32:53.280401    5918 out.go:177] * Enabled addons: 
	I0425 12:32:53.338468    5918 addons.go:505] duration metric: took 138.232565ms for enable addons: enabled=[]
	I0425 12:32:53.404100    5918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 12:32:53.419525    5918 node_ready.go:35] waiting up to 6m0s for node "multinode-034000" to be "Ready" ...
	I0425 12:32:53.419580    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:53.419586    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:53.419592    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:53.419595    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:53.422708    5918 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:32:53.422730    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:53.422735    5918 round_trippers.go:580]     Audit-Id: c3be2f26-3831-49a8-81d4-f8113c84680c
	I0425 12:32:53.422738    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:53.422741    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:53.422743    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:53.422747    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:53.422752    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:53 GMT
	I0425 12:32:53.422974    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"774","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0425 12:32:53.920376    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:53.920395    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:53.920404    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:53.920409    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:53.922225    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:53.922237    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:53.922244    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:53.922249    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:53.922253    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:54 GMT
	I0425 12:32:53.922257    5918 round_trippers.go:580]     Audit-Id: 33bd6997-0816-417c-bd84-cf64ed532781
	I0425 12:32:53.922261    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:53.922264    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:53.922330    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:53.922506    5918 node_ready.go:49] node "multinode-034000" has status "Ready":"True"
	I0425 12:32:53.922518    5918 node_ready.go:38] duration metric: took 502.959174ms for node "multinode-034000" to be "Ready" ...
	I0425 12:32:53.922524    5918 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 12:32:53.922552    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:32:53.922557    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:53.922562    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:53.922565    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:53.924709    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:53.924719    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:53.924726    5918 round_trippers.go:580]     Audit-Id: 1a4d3d25-f8bf-44f2-99ad-9253e9d718b8
	I0425 12:32:53.924730    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:53.924734    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:53.924747    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:53.924754    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:53.924758    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:54 GMT
	I0425 12:32:53.925621    5918 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"855"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"816","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87462 chars]
	I0425 12:32:53.927609    5918 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace to be "Ready" ...
	I0425 12:32:53.985653    5918 request.go:629] Waited for 57.982936ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:53.985692    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:53.985697    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:53.985705    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:53.985737    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:53.987526    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:53.987559    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:53.987569    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:53.987573    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:53.987577    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:53.987581    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:53.987583    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:54 GMT
	I0425 12:32:53.987586    5918 round_trippers.go:580]     Audit-Id: 700d981e-c056-47e2-9655-b0c1a1875cf5
	I0425 12:32:53.987689    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"816","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0425 12:32:54.185170    5918 request.go:629] Waited for 197.150268ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:54.185219    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:54.185225    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:54.185231    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:54.185236    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:54.186964    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:54.186975    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:54.186980    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:54.186983    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:54.186991    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:54.186994    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:54.186998    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:54 GMT
	I0425 12:32:54.187001    5918 round_trippers.go:580]     Audit-Id: f43931d6-2cf4-4f37-a74e-8eeaeb867b6f
	I0425 12:32:54.187233    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:54.428237    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:54.428261    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:54.428273    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:54.428281    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:54.430962    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:54.430978    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:54.430985    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:54 GMT
	I0425 12:32:54.430991    5918 round_trippers.go:580]     Audit-Id: 0af1d17b-6ca7-479b-a2f3-2ecc7b657e51
	I0425 12:32:54.430994    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:54.430998    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:54.431001    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:54.431004    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:54.431094    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"816","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0425 12:32:54.585102    5918 request.go:629] Waited for 153.633127ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:54.585179    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:54.585183    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:54.585189    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:54.585193    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:54.586717    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:54.586727    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:54.586732    5918 round_trippers.go:580]     Audit-Id: c265ee68-55ee-474d-a694-2a98216bfca9
	I0425 12:32:54.586736    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:54.586740    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:54.586746    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:54.586750    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:54.586762    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:54 GMT
	I0425 12:32:54.587121    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:54.929828    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:54.929856    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:54.929867    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:54.929875    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:54.932683    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:54.932699    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:54.932707    5918 round_trippers.go:580]     Audit-Id: 8a7d8784-a8d2-4c99-97f8-40b3d0ae49b5
	I0425 12:32:54.932712    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:54.932717    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:54.932721    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:54.932726    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:54.932730    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:55 GMT
	I0425 12:32:54.932828    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"816","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0425 12:32:54.985823    5918 request.go:629] Waited for 52.60652ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:54.985886    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:54.985892    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:54.985900    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:54.985906    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:54.987662    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:54.987675    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:54.987682    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:54.987687    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:55 GMT
	I0425 12:32:54.987691    5918 round_trippers.go:580]     Audit-Id: 007bc066-aec0-42b0-a085-ad03101dab8f
	I0425 12:32:54.987695    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:54.987698    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:54.987713    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:54.988135    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:55.428657    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:55.428681    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:55.428692    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:55.428698    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:55.431302    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:55.431316    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:55.431323    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:55.431328    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:55.431332    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:55 GMT
	I0425 12:32:55.431334    5918 round_trippers.go:580]     Audit-Id: 5cc0a0c4-73b4-46e5-8630-a07443a44995
	I0425 12:32:55.431338    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:55.431341    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:55.431443    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"816","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0425 12:32:55.431811    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:55.431820    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:55.431830    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:55.431834    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:55.433016    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:55.433028    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:55.433036    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:55.433041    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:55.433045    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:55.433047    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:55 GMT
	I0425 12:32:55.433051    5918 round_trippers.go:580]     Audit-Id: d97b0ac5-02df-45cd-9f41-f53e4412d0c9
	I0425 12:32:55.433054    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:55.433172    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:55.928148    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:55.928165    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:55.928174    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:55.928177    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:55.930570    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:55.930582    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:55.930588    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:55.930591    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:55.930595    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:55.930597    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:55.930600    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:56 GMT
	I0425 12:32:55.930603    5918 round_trippers.go:580]     Audit-Id: 7ed30278-6956-4955-a82d-10318e961d04
	I0425 12:32:55.930862    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"816","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0425 12:32:55.931159    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:55.931172    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:55.931178    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:55.931183    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:55.932256    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:55.932268    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:55.932276    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:55.932280    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:55.932283    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:56 GMT
	I0425 12:32:55.932286    5918 round_trippers.go:580]     Audit-Id: 3a802f4c-3ff6-4570-9e81-387f477d4d42
	I0425 12:32:55.932290    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:55.932306    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:55.932525    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:55.932692    5918 pod_ready.go:102] pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace has status "Ready":"False"
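
The pod_ready.go lines mark iterations of a readiness poll: roughly every 500ms the pod and its node are fetched again until the pod's Ready condition flips to True. A minimal sketch of that pattern with client-go (an illustration of the technique, not minikube's actual pod_ready implementation; waitPodReady is a hypothetical helper name):

    package podwait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the API server, much as the log above does,
    // until the pod's Ready condition is True or the timeout elapses.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err // surface API errors instead of retrying blindly
    			}
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil // no Ready condition yet; keep polling
    		})
    }
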
	I0425 12:32:56.429991    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:56.430012    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:56.430023    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:56.430029    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:56.432645    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:56.432659    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:56.432665    5918 round_trippers.go:580]     Audit-Id: 51a43be1-e8db-4f2a-8faf-c437a9eadcbc
	I0425 12:32:56.432670    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:56.432674    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:56.432678    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:56.432681    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:56.432685    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:56 GMT
	I0425 12:32:56.432788    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"816","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0425 12:32:56.433156    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:56.433166    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:56.433174    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:56.433179    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:56.434636    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:56.434645    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:56.434650    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:56.434654    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:56.434658    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:56 GMT
	I0425 12:32:56.434662    5918 round_trippers.go:580]     Audit-Id: 334ad441-faf1-4822-b8d9-b7ba4b7d07a0
	I0425 12:32:56.434678    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:56.434682    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:56.434751    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:56.927969    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:56.927989    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:56.927998    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:56.928003    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:56.930114    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:56.930133    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:56.930140    5918 round_trippers.go:580]     Audit-Id: ff11d302-ab14-45a8-a020-0bacc6fabcc1
	I0425 12:32:56.930145    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:56.930150    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:56.930158    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:56.930173    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:56.930178    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:57 GMT
	I0425 12:32:56.930267    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"816","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0425 12:32:56.930542    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:56.930549    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:56.930554    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:56.930556    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:56.931636    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:56.931644    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:56.931649    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:57 GMT
	I0425 12:32:56.931653    5918 round_trippers.go:580]     Audit-Id: 131af89e-1c29-4add-969c-3a831e7add61
	I0425 12:32:56.931655    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:56.931657    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:56.931661    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:56.931663    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:56.931732    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:57.429869    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:57.429882    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:57.429889    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:57.429892    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:57.431676    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:57.431686    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:57.431692    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:57 GMT
	I0425 12:32:57.431695    5918 round_trippers.go:580]     Audit-Id: 16c26091-b00f-4351-a205-911d1c413200
	I0425 12:32:57.431698    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:57.431714    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:57.431720    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:57.431722    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:57.431831    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"816","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0425 12:32:57.432130    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:57.432137    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:57.432143    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:57.432159    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:57.433540    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:57.433547    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:57.433553    5918 round_trippers.go:580]     Audit-Id: 17ef14aa-1109-4eea-8431-f69eaf8689bb
	I0425 12:32:57.433585    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:57.433589    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:57.433592    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:57.433595    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:57.433598    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:57 GMT
	I0425 12:32:57.433853    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:57.929238    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:57.929262    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:57.929274    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:57.929290    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:57.932148    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:57.932167    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:57.932174    5918 round_trippers.go:580]     Audit-Id: 1687ded3-41bb-49fe-a0c4-61bb586adc76
	I0425 12:32:57.932178    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:57.932183    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:57.932190    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:57.932195    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:57.932200    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:58 GMT
	I0425 12:32:57.932441    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"867","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7012 chars]
	I0425 12:32:57.932874    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:57.932884    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:57.932892    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:57.932898    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:57.934379    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:57.934388    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:57.934392    5918 round_trippers.go:580]     Audit-Id: 7623a635-da97-48e3-a022-dbb9fd8818a7
	I0425 12:32:57.934395    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:57.934399    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:57.934401    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:57.934404    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:57.934407    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:58 GMT
	I0425 12:32:57.934511    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:57.934700    5918 pod_ready.go:102] pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace has status "Ready":"False"
	I0425 12:32:58.428082    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:58.428110    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:58.428149    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:58.428202    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:58.431087    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:58.431102    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:58.431109    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:58.431115    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:58 GMT
	I0425 12:32:58.431120    5918 round_trippers.go:580]     Audit-Id: a53af490-b532-4565-8edb-944f48de7f64
	I0425 12:32:58.431126    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:58.431147    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:58.431155    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:58.431611    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"867","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7012 chars]
	I0425 12:32:58.431965    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:58.431972    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:58.431978    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:58.431981    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:58.433208    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:58.433216    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:58.433221    5918 round_trippers.go:580]     Audit-Id: d6ef835e-4e3c-404c-ba33-261042773c0b
	I0425 12:32:58.433225    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:58.433229    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:58.433231    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:58.433234    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:58.433236    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:58 GMT
	I0425 12:32:58.433357    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:58.930005    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:32:58.930028    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:58.930040    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:58.930048    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:58.932753    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:58.932769    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:58.932780    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:58.932787    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:58.932794    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:58.932798    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:58.932802    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:59 GMT
	I0425 12:32:58.932807    5918 round_trippers.go:580]     Audit-Id: 0c2141bb-a72b-4ab0-8ac7-e8af58917c69
	I0425 12:32:58.933003    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"871","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6783 chars]
	I0425 12:32:58.933378    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:58.933388    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:58.933396    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:58.933406    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:58.934925    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:58.934934    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:58.934940    5918 round_trippers.go:580]     Audit-Id: e887336b-ae8a-4f6b-9499-03a896eb2f76
	I0425 12:32:58.934942    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:58.934945    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:58.934947    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:58.934950    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:58.934953    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:59 GMT
	I0425 12:32:58.935144    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:58.935318    5918 pod_ready.go:92] pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace has status "Ready":"True"
	I0425 12:32:58.935327    5918 pod_ready.go:81] duration metric: took 5.007558162s for pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace to be "Ready" ...
	I0425 12:32:58.935346    5918 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-034000" in "kube-system" namespace to be "Ready" ...
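
Once coredns reports Ready (the 5.007558162s duration metric above is consistent with roughly ten polls at the ~500ms interval), the same wait is restarted for the next control-plane pod, etcd-multinode-034000, as the requests below show. Continuing the hypothetical waitPodReady sketch from earlier, the sequential flow would look roughly like this fragment (ctx, cs, and the 6-minute timeout mirror the log; the loop itself is illustrative):

    // Hypothetical driver loop; pod names taken from the log above.
    for _, name := range []string{"coredns-7db6d8ff4d-w5z5l", "etcd-multinode-034000"} {
    	if err := waitPodReady(ctx, cs, "kube-system", name, 6*time.Minute); err != nil {
    		return fmt.Errorf("pod %q in kube-system never became Ready: %w", name, err)
    	}
    }
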
	I0425 12:32:58.935378    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:32:58.935384    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:58.935389    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:58.935393    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:58.936587    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:58.936596    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:58.936601    5918 round_trippers.go:580]     Audit-Id: bcc99f73-0e29-4225-9253-0507a9a3714e
	I0425 12:32:58.936606    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:58.936610    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:58.936612    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:58.936623    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:58.936628    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:59 GMT
	I0425 12:32:58.936706    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:32:58.936934    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:58.936941    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:58.936946    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:58.936950    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:58.937889    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:32:58.937897    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:58.937902    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:58.937906    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:58.937909    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:58.937913    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:58.937916    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:59 GMT
	I0425 12:32:58.937919    5918 round_trippers.go:580]     Audit-Id: 5a3802d5-56f9-4318-bd85-4f7856f130e8
	I0425 12:32:58.938129    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:59.435564    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:32:59.458656    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:59.458673    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:59.458684    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:59.461483    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:32:59.461499    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:59.461506    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:59.461510    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:59.461521    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:59.461527    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:59.461530    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:59 GMT
	I0425 12:32:59.461534    5918 round_trippers.go:580]     Audit-Id: 3d4f0d57-9b30-4663-8f66-7d0574932ad3
	I0425 12:32:59.461655    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:32:59.462000    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:59.462010    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:59.462018    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:59.462023    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:59.463455    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:59.463467    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:59.463473    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:59.463476    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:32:59 GMT
	I0425 12:32:59.463482    5918 round_trippers.go:580]     Audit-Id: 03d967ef-b6fa-42da-9dd5-b97083e81eb3
	I0425 12:32:59.463484    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:59.463487    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:59.463489    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:59.463618    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:32:59.936831    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:32:59.936854    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:59.936867    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:59.936876    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:59.939930    5918 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:32:59.939943    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:59.939949    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:59.939952    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:59.939955    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:59.939957    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:00 GMT
	I0425 12:32:59.939960    5918 round_trippers.go:580]     Audit-Id: 1a15d672-3abf-46bb-a709-fc769ce6f1f5
	I0425 12:32:59.939962    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:59.940029    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:32:59.940294    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:32:59.940301    5918 round_trippers.go:469] Request Headers:
	I0425 12:32:59.940325    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:32:59.940329    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:32:59.941840    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:32:59.941850    5918 round_trippers.go:577] Response Headers:
	I0425 12:32:59.941857    5918 round_trippers.go:580]     Audit-Id: 015b2e5d-598e-4021-8ac2-5d2c5bae9a30
	I0425 12:32:59.941862    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:32:59.941866    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:32:59.941871    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:32:59.941881    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:32:59.941885    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:00 GMT
	I0425 12:32:59.941993    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:00.436256    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:33:00.436271    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:00.436277    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:00.436281    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:00.438323    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:00.438345    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:00.438351    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:00 GMT
	I0425 12:33:00.438354    5918 round_trippers.go:580]     Audit-Id: e1389f89-7c67-4bb1-9ee6-567cc84cdedb
	I0425 12:33:00.438357    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:00.438359    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:00.438361    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:00.438365    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:00.438430    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:33:00.438685    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:00.438692    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:00.438698    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:00.438700    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:00.440071    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:00.440080    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:00.440085    5918 round_trippers.go:580]     Audit-Id: aaecd1c9-3650-4664-8cc6-5c836f438c8f
	I0425 12:33:00.440089    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:00.440092    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:00.440095    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:00.440097    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:00.440099    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:00 GMT
	I0425 12:33:00.440171    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:00.937188    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:33:00.937212    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:00.937223    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:00.937230    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:00.939967    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:00.939983    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:00.940002    5918 round_trippers.go:580]     Audit-Id: 0660a2e2-d710-47e4-8fe7-06655e844735
	I0425 12:33:00.940007    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:00.940010    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:00.940014    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:00.940017    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:00.940020    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:01 GMT
	I0425 12:33:00.940486    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:33:00.940843    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:00.940853    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:00.940860    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:00.940866    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:00.942252    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:00.942259    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:00.942264    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:00.942266    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:00.942268    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:01 GMT
	I0425 12:33:00.942271    5918 round_trippers.go:580]     Audit-Id: 75f8be81-9412-498e-b42d-6e7352f9a4a0
	I0425 12:33:00.942273    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:00.942276    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:00.942366    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:00.942554    5918 pod_ready.go:102] pod "etcd-multinode-034000" in "kube-system" namespace has status "Ready":"False"
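The `has status "Ready":"False"` line reflects the pod's status.conditions entry of type Ready, which the waiter re-checks on every poll. A minimal helper performing the same condition check (the name isPodReady is illustrative, not minikube's actual function):

	// isPodReady reports whether the pod's Ready condition is True.
	// Illustrative only; minikube's pod_ready.go implements its own variant.
	package readiness

	import corev1 "k8s.io/api/core/v1"

	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}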
	I0425 12:33:01.436059    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:33:01.436074    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:01.436083    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:01.436089    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:01.437866    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:01.437887    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:01.437899    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:01 GMT
	I0425 12:33:01.437907    5918 round_trippers.go:580]     Audit-Id: d900e9e2-cd4a-4986-a66b-b2524c9dfbf7
	I0425 12:33:01.437923    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:01.437927    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:01.437930    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:01.437932    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:01.438186    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:33:01.438442    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:01.438450    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:01.438455    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:01.438459    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:01.442193    5918 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:33:01.442202    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:01.442207    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:01.442210    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:01.442213    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:01.442216    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:01.442219    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:01 GMT
	I0425 12:33:01.442223    5918 round_trippers.go:580]     Audit-Id: c754bc0a-b435-4184-b931-d638ceca166f
	I0425 12:33:01.442657    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:01.935856    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:33:01.935870    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:01.935876    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:01.935879    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:01.940342    5918 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 12:33:01.940354    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:01.940359    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:01.940362    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:01.940366    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:01.940369    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:01.940377    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:02 GMT
	I0425 12:33:01.940381    5918 round_trippers.go:580]     Audit-Id: c5abb75a-682b-4762-acae-f754f5fa99ff
	I0425 12:33:01.940470    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:33:01.940723    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:01.940730    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:01.940735    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:01.940738    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:01.942288    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:01.942297    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:01.942302    5918 round_trippers.go:580]     Audit-Id: c7c3c08e-12c7-460d-966e-86d0d12ff817
	I0425 12:33:01.942305    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:01.942308    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:01.942310    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:01.942312    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:01.942315    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:02 GMT
	I0425 12:33:01.942495    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:02.437236    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:33:02.437253    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:02.437262    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:02.437265    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:02.438909    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:02.438918    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:02.438923    5918 round_trippers.go:580]     Audit-Id: d1fea06f-c892-4b1e-b2f2-5db594271c74
	I0425 12:33:02.438928    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:02.438930    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:02.438933    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:02.438935    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:02.438937    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:02 GMT
	I0425 12:33:02.439019    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:33:02.439285    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:02.439292    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:02.439297    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:02.439300    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:02.440441    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:02.440450    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:02.440455    5918 round_trippers.go:580]     Audit-Id: 9b4dbff3-eb63-4021-b514-795f9b765697
	I0425 12:33:02.440464    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:02.440468    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:02.440470    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:02.440472    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:02.440475    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:02 GMT
	I0425 12:33:02.440739    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:02.937153    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:33:02.937177    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:02.937189    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:02.937196    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:02.939898    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:02.939913    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:02.939922    5918 round_trippers.go:580]     Audit-Id: 7a672a35-c19d-420f-811f-f1512dd862b1
	I0425 12:33:02.939930    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:02.939935    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:02.939941    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:02.939949    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:02.939953    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:03 GMT
	I0425 12:33:02.940106    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:33:02.940426    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:02.940436    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:02.940451    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:02.940467    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:02.941881    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:02.941889    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:02.941894    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:03 GMT
	I0425 12:33:02.941898    5918 round_trippers.go:580]     Audit-Id: 59586d93-6ecc-4bc3-a1b2-44e7050bd2fb
	I0425 12:33:02.941901    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:02.941905    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:02.941908    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:02.941911    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:02.941974    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:03.436836    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:33:03.436867    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:03.436879    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:03.436887    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:03.439544    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:03.439558    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:03.439565    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:03 GMT
	I0425 12:33:03.439571    5918 round_trippers.go:580]     Audit-Id: 3f398f44-ec41-46b0-a73f-554c83945a84
	I0425 12:33:03.439574    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:03.439577    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:03.439581    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:03.439585    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:03.439723    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:33:03.440107    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:03.440117    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:03.440125    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:03.440129    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:03.441771    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:03.441779    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:03.441784    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:03 GMT
	I0425 12:33:03.441787    5918 round_trippers.go:580]     Audit-Id: b791e186-6190-423c-ab03-d0158230b455
	I0425 12:33:03.441789    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:03.441792    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:03.441795    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:03.441798    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:03.442035    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:03.442215    5918 pod_ready.go:102] pod "etcd-multinode-034000" in "kube-system" namespace has status "Ready":"False"
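The timestamps above show a steady cadence of roughly two polls per second against the "waiting up to 6m0s" budget. A sketch of such a poll-with-timeout loop using newer apimachinery's wait package, reusing isPodReady from the earlier sketch; the 500 ms interval, timeout handling, and tolerance of transient API errors are assumptions, not minikube's exact implementation:

	// waitPodReady polls a kube-system pod until its Ready condition is True
	// or the timeout expires. Sketch only.
	package readiness

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // assumption: keep polling through transient API errors
				}
				return isPodReady(pod), nil
			})
	}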
	I0425 12:33:03.936129    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:33:03.936150    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:03.936162    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:03.936170    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:03.938781    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:03.938797    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:03.938804    5918 round_trippers.go:580]     Audit-Id: 09d5f6e4-e5d1-48ec-954b-49939e2c6c11
	I0425 12:33:03.938809    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:03.938812    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:03.938816    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:03.938819    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:03.938822    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:04 GMT
	I0425 12:33:03.938928    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:33:03.939273    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:03.939283    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:03.939291    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:03.939296    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:03.940892    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:03.940899    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:03.940904    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:03.940910    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:03.940916    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:04 GMT
	I0425 12:33:03.940920    5918 round_trippers.go:580]     Audit-Id: 2f7be176-09b9-468e-bfad-2b284b937463
	I0425 12:33:03.940923    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:03.940926    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:03.940996    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:04.435755    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:33:04.458598    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:04.458614    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:04.458623    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:04.461081    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:04.461103    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:04.461110    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:04.461126    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:04.461131    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:04 GMT
	I0425 12:33:04.461135    5918 round_trippers.go:580]     Audit-Id: 1e1d04a5-b559-4355-9d7b-cfe12fa22883
	I0425 12:33:04.461138    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:04.461142    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:04.461272    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"815","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0425 12:33:04.461617    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:04.461627    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:04.461635    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:04.461640    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:04.463107    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:04.463117    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:04.463123    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:04.463127    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:04.463129    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:04.463132    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:04 GMT
	I0425 12:33:04.463135    5918 round_trippers.go:580]     Audit-Id: dfc745a1-dbbe-45bb-9566-d3b42b837a4b
	I0425 12:33:04.463138    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:04.463267    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:04.936472    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:33:04.936494    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:04.936506    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:04.936513    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:04.939080    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:04.939099    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:04.939109    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:04.939117    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:04.939122    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:04.939166    5918 round_trippers.go:580]     Audit-Id: 5b72b131-8994-47a9-a2fd-8d49697f3790
	I0425 12:33:04.939174    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:04.939178    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:04.939282    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"885","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0425 12:33:04.939604    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:04.939614    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:04.939622    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:04.939626    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:04.940967    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:04.940976    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:04.940981    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:04.940983    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:04.940986    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:04.940988    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:04.940996    5918 round_trippers.go:580]     Audit-Id: 93d84a40-b4a9-4f73-9a00-ee1e8e325037
	I0425 12:33:04.940999    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:04.941069    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:04.941248    5918 pod_ready.go:92] pod "etcd-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:04.941257    5918 pod_ready.go:81] duration metric: took 6.005722881s for pod "etcd-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:04.941267    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-034000" in "kube-system" namespace to be "Ready" ...
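Once etcd turns Ready (after 6.005722881s), the log moves immediately to kube-apiserver and then kube-controller-manager: the readiness waits run sequentially, one pod at a time, and pods that are already Ready return within milliseconds. A sketch of that outer loop, reusing the hypothetical waitPodReady helper above, with pod names taken from the log:

	// waitControlPlane checks each control-plane pod in the order the log
	// shows. Sketch only; the sequencing is inferred from the log, not from
	// minikube's source.
	func waitControlPlane(ctx context.Context, cs kubernetes.Interface) error {
		for _, name := range []string{
			"etcd-multinode-034000",
			"kube-apiserver-multinode-034000",
			"kube-controller-manager-multinode-034000",
		} {
			if err := waitPodReady(ctx, cs, name); err != nil {
				return err
			}
		}
		return nil
	}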
	I0425 12:33:04.941293    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-034000
	I0425 12:33:04.941297    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:04.941302    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:04.941305    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:04.942367    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:04.942376    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:04.942381    5918 round_trippers.go:580]     Audit-Id: 1b1d8878-acb9-4026-a8e3-3981b6dc9660
	I0425 12:33:04.942386    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:04.942389    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:04.942391    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:04.942394    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:04.942397    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:04.942469    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-034000","namespace":"kube-system","uid":"d142ad34-9a12-42f9-b92d-e0f968eaaa14","resourceVersion":"869","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"d809c763efd59e895582aab9f4e65d83","kubernetes.io/config.mirror":"d809c763efd59e895582aab9f4e65d83","kubernetes.io/config.seen":"2024-04-25T19:24:03.349967563Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0425 12:33:04.942701    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:04.942708    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:04.942713    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:04.942722    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:04.943757    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:04.943766    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:04.943773    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:04.943779    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:04.943782    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:04.943786    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:04.943790    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:04.943794    5918 round_trippers.go:580]     Audit-Id: b19a0d68-9879-4c2b-ab9f-793eca619c41
	I0425 12:33:04.943907    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:04.944077    5918 pod_ready.go:92] pod "kube-apiserver-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:04.944084    5918 pod_ready.go:81] duration metric: took 2.812795ms for pod "kube-apiserver-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:04.944090    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:04.944115    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-034000
	I0425 12:33:04.944119    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:04.944124    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:04.944128    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:04.945211    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:04.945219    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:04.945224    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:04.945228    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:04.945232    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:04.945236    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:04.945238    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:04.945241    5918 round_trippers.go:580]     Audit-Id: 0b190018-4228-4315-af5d-cdc48dcdef60
	I0425 12:33:04.945409    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-034000","namespace":"kube-system","uid":"19072fbe-3cb2-4b92-bd98-b549daec4cf2","resourceVersion":"862","creationTimestamp":"2024-04-25T19:24:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8241dd03fc3448a4525ccebdefebf535","kubernetes.io/config.mirror":"8241dd03fc3448a4525ccebdefebf535","kubernetes.io/config.seen":"2024-04-25T19:23:58.495195502Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0425 12:33:04.945647    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:04.945657    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:04.945663    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:04.945668    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:04.946667    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:33:04.946676    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:04.946683    5918 round_trippers.go:580]     Audit-Id: da5e1351-5986-4297-9534-a520cd8c179e
	I0425 12:33:04.946688    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:04.946694    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:04.946701    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:04.946706    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:04.946712    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:04.946972    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:04.947141    5918 pod_ready.go:92] pod "kube-controller-manager-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:04.947149    5918 pod_ready.go:81] duration metric: took 3.05393ms for pod "kube-controller-manager-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:04.947155    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d8zc5" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:04.947185    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8zc5
	I0425 12:33:04.947195    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:04.947201    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:04.947205    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:04.948101    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:33:04.948112    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:04.948117    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:04.948122    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:04.948125    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:04.948129    5918 round_trippers.go:580]     Audit-Id: a844165f-f73e-4179-add8-738b7088a942
	I0425 12:33:04.948132    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:04.948136    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:04.948366    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d8zc5","generateName":"kube-proxy-","namespace":"kube-system","uid":"feefb48f-5488-4adc-b7e8-47f5d92bd2f8","resourceVersion":"667","creationTimestamp":"2024-04-25T19:25:33Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:25:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6056 chars]
	I0425 12:33:04.948590    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:33:04.948597    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:04.948602    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:04.948611    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:04.949535    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:33:04.949542    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:04.949547    5918 round_trippers.go:580]     Audit-Id: 5aedac58-201c-4122-916f-7498ba7024ac
	I0425 12:33:04.949551    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:04.949554    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:04.949557    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:04.949560    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:04.949563    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:04.949777    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"a08f7c72-c78c-42d9-aa96-d065a8c730b6","resourceVersion":"882","creationTimestamp":"2024-04-25T19:25:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_25_33_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:25:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3898 chars]
	I0425 12:33:04.949937    5918 pod_ready.go:97] node "multinode-034000-m03" hosting pod "kube-proxy-d8zc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000-m03" has status "Ready":"Unknown"
	I0425 12:33:04.949948    5918 pod_ready.go:81] duration metric: took 2.788067ms for pod "kube-proxy-d8zc5" in "kube-system" namespace to be "Ready" ...
	E0425 12:33:04.949954    5918 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-034000-m03" hosting pod "kube-proxy-d8zc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000-m03" has status "Ready":"Unknown"
	I0425 12:33:04.949963    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gmspl" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:04.949998    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmspl
	I0425 12:33:04.950003    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:04.950008    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:04.950012    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:04.950949    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:33:04.950957    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:04.950962    5918 round_trippers.go:580]     Audit-Id: 97316366-8ee7-42d1-8441-6e36b3f884cb
	I0425 12:33:04.950968    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:04.950974    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:04.950978    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:04.950981    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:04.950998    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:04.951100    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gmspl","generateName":"kube-proxy-","namespace":"kube-system","uid":"b0f6c7c8-ef54-4c63-9de2-05e01ace3e15","resourceVersion":"842","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0425 12:33:04.985509    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:04.985579    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:04.985593    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:04.985602    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:04.988086    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:04.988107    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:04.988115    5918 round_trippers.go:580]     Audit-Id: 1a7feb59-43e9-4700-a163-cab129e20cc7
	I0425 12:33:04.988120    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:04.988123    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:04.988127    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:04.988130    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:04.988134    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:04.988238    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:04.988499    5918 pod_ready.go:92] pod "kube-proxy-gmspl" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:04.988511    5918 pod_ready.go:81] duration metric: took 38.541508ms for pod "kube-proxy-gmspl" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:04.988519    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mp7qm" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:05.187440    5918 request.go:629] Waited for 198.870339ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mp7qm
	I0425 12:33:05.187537    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mp7qm
	I0425 12:33:05.187545    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:05.187554    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:05.187562    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:05.189868    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:05.189881    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:05.189888    5918 round_trippers.go:580]     Audit-Id: 242de2b4-7836-49a5-8430-c20d1f3b313e
	I0425 12:33:05.189896    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:05.189902    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:05.189907    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:05.189912    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:05.189917    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:05.190193    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mp7qm","generateName":"kube-proxy-","namespace":"kube-system","uid":"cc106198-3317-44e2-b1a7-cc5eac6dcadc","resourceVersion":"479","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
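[editor's note] The "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter (the QPS/Burst settings on the client), not from server-side API Priority and Fairness. Below is a standalone sketch of the same mechanism using client-go's flowcontrol package; the limits of 5 QPS with a burst of 10 are an assumption, roughly client-go's historical defaults.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Token bucket: the first `burst` calls pass immediately, after which
	// callers block until tokens refill at `qps` per second. This is what
	// produces the ~200ms waits logged above.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks until a token is available
		fmt.Printf("request %2d waited %v\n", i, time.Since(start))
	}
}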
	I0425 12:33:05.386471    5918 request.go:629] Waited for 195.896376ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:05.386528    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:05.386543    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:05.386578    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:05.386584    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:05.389337    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:05.389354    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:05.389362    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:05.389366    5918 round_trippers.go:580]     Audit-Id: 5db9e74e-6802-433c-beb4-7ff71b6a5021
	I0425 12:33:05.389370    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:05.389373    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:05.389377    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:05.389381    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:05.389467    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"cde39952-471f-4875-893f-0164a7600dc1","resourceVersion":"545","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_24_51_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0425 12:33:05.389696    5918 pod_ready.go:92] pod "kube-proxy-mp7qm" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:05.389707    5918 pod_ready.go:81] duration metric: took 401.170412ms for pod "kube-proxy-mp7qm" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:05.389715    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:05.586185    5918 request.go:629] Waited for 196.374599ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-034000
	I0425 12:33:05.586235    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-034000
	I0425 12:33:05.586293    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:05.586309    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:05.586317    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:05.590119    5918 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:33:05.590132    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:05.590139    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:05.590143    5918 round_trippers.go:580]     Audit-Id: dbec22ad-940f-401e-8d76-e28247683024
	I0425 12:33:05.590147    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:05.590151    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:05.590155    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:05.590160    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:05.590306    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-034000","namespace":"kube-system","uid":"889fb9d4-d8d9-4a92-be22-d0ab1518bc93","resourceVersion":"870","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9a729f77a28640b9fa006c14e6acbd43","kubernetes.io/config.mirror":"9a729f77a28640b9fa006c14e6acbd43","kubernetes.io/config.seen":"2024-04-25T19:24:03.349969029Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0425 12:33:05.786765    5918 request.go:629] Waited for 196.121849ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:05.786831    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:05.786840    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:05.786851    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:05.786860    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:05.789373    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:05.789386    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:05.789401    5918 round_trippers.go:580]     Audit-Id: 2feb5006-1add-4493-82ea-60c39857769a
	I0425 12:33:05.789406    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:05.789415    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:05.789421    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:05.789424    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:05.789429    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:05.789507    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:05.789779    5918 pod_ready.go:92] pod "kube-scheduler-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:05.789791    5918 pod_ready.go:81] duration metric: took 400.056488ms for pod "kube-scheduler-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:05.789799    5918 pod_ready.go:38] duration metric: took 11.866912026s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
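[editor's note] The waits summarized above are minikube's pod_ready.go loop: fetch the pod, fetch its host node, and skip the pod when the node itself is not Ready (the kube-proxy-d8zc5 case earlier). A minimal client-go sketch of those two checks follows; it is not minikube's actual implementation, the pod name is copied from the log for illustration, and the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady reports whether the node's NodeReady condition is True; the log
// above skips pods whose host node reports "Ready":"Unknown".
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-d8zc5", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if !nodeReady(node) {
			fmt.Println("host node not Ready, skipping wait")
			return
		}
		if podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}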
	I0425 12:33:05.789816    5918 api_server.go:52] waiting for apiserver process to appear ...
	I0425 12:33:05.789876    5918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:33:05.803342    5918 command_runner.go:130] > 1675
	I0425 12:33:05.803421    5918 api_server.go:72] duration metric: took 12.602816073s to wait for apiserver process to appear ...
	I0425 12:33:05.803432    5918 api_server.go:88] waiting for apiserver healthz status ...
	I0425 12:33:05.803443    5918 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:33:05.806955    5918 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
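[editor's note] The healthz probe above is a plain HTTPS GET whose body is expected to be "ok". A stripped-down sketch follows, reusing the apiserver address from the log; the real client authenticates with the profile's client certificates, which this sketch omits, so InsecureSkipVerify here is for illustration only.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://192.169.0.16:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // healthy apiserver: 200 ok
}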
	I0425 12:33:05.806993    5918 round_trippers.go:463] GET https://192.169.0.16:8443/version
	I0425 12:33:05.806998    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:05.807003    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:05.807008    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:05.807596    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:33:05.807604    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:05.807609    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:05.807613    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:05.807615    5918 round_trippers.go:580]     Content-Length: 263
	I0425 12:33:05.807618    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:05 GMT
	I0425 12:33:05.807621    5918 round_trippers.go:580]     Audit-Id: 375d9af6-3887-4d16-87c1-0bbe70aca84a
	I0425 12:33:05.807623    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:05.807626    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:05.807691    5918 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0425 12:33:05.807717    5918 api_server.go:141] control plane version: v1.30.0
	I0425 12:33:05.807730    5918 api_server.go:131] duration metric: took 4.293781ms to wait for apiserver health ...
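[editor's note] The /version body above is the standard Kubernetes version payload. The sketch below decodes it into a struct mirroring exactly the fields shown; the struct is hand-rolled here rather than apimachinery's version.Info type, and the TLS caveat from the previous sketch applies.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// versionInfo mirrors the fields of the /version response body above.
type versionInfo struct {
	Major        string `json:"major"`
	Minor        string `json:"minor"`
	GitVersion   string `json:"gitVersion"`
	GitCommit    string `json:"gitCommit"`
	GitTreeState string `json:"gitTreeState"`
	BuildDate    string `json:"buildDate"`
	GoVersion    string `json:"goVersion"`
	Compiler     string `json:"compiler"`
	Platform     string `json:"platform"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://192.169.0.16:8443/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.30.0
}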
	I0425 12:33:05.807736    5918 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 12:33:05.987426    5918 request.go:629] Waited for 179.642059ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:33:05.987534    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:33:05.987545    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:05.987557    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:05.987566    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:05.991508    5918 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:33:05.991521    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:05.991527    5918 round_trippers.go:580]     Audit-Id: 6bdb565c-376f-4fbb-95d4-dd83eba07da7
	I0425 12:33:05.991530    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:05.991545    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:05.991548    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:05.991550    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:05.991552    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:06 GMT
	I0425 12:33:05.992573    5918 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"885"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"871","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86435 chars]
	I0425 12:33:05.994486    5918 system_pods.go:59] 12 kube-system pods found
	I0425 12:33:05.994497    5918 system_pods.go:61] "coredns-7db6d8ff4d-w5z5l" [21ddb5bc-fcf1-4ec4-9fdb-8595d406b302] Running
	I0425 12:33:05.994500    5918 system_pods.go:61] "etcd-multinode-034000" [fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5] Running
	I0425 12:33:05.994503    5918 system_pods.go:61] "kindnet-7ktv2" [957b7d0e-0754-481e-aa73-6772434e58e3] Running
	I0425 12:33:05.994507    5918 system_pods.go:61] "kindnet-gmxwj" [eb9b5a06-bd76-43b9-b8f9-8f5e1243769d] Running
	I0425 12:33:05.994510    5918 system_pods.go:61] "kindnet-spsv9" [fa2c70be-02ec-404a-9eb0-7862c49d8b3b] Running
	I0425 12:33:05.994512    5918 system_pods.go:61] "kube-apiserver-multinode-034000" [d142ad34-9a12-42f9-b92d-e0f968eaaa14] Running
	I0425 12:33:05.994516    5918 system_pods.go:61] "kube-controller-manager-multinode-034000" [19072fbe-3cb2-4b92-bd98-b549daec4cf2] Running
	I0425 12:33:05.994519    5918 system_pods.go:61] "kube-proxy-d8zc5" [feefb48f-5488-4adc-b7e8-47f5d92bd2f8] Running
	I0425 12:33:05.994522    5918 system_pods.go:61] "kube-proxy-gmspl" [b0f6c7c8-ef54-4c63-9de2-05e01ace3e15] Running
	I0425 12:33:05.994524    5918 system_pods.go:61] "kube-proxy-mp7qm" [cc106198-3317-44e2-b1a7-cc5eac6dcadc] Running
	I0425 12:33:05.994527    5918 system_pods.go:61] "kube-scheduler-multinode-034000" [889fb9d4-d8d9-4a92-be22-d0ab1518bc93] Running
	I0425 12:33:05.994529    5918 system_pods.go:61] "storage-provisioner" [89c78c52-dabe-4a5b-ac3b-0209ccb11139] Running
	I0425 12:33:05.994534    5918 system_pods.go:74] duration metric: took 186.788269ms to wait for pod list to return data ...
	I0425 12:33:05.994539    5918 default_sa.go:34] waiting for default service account to be created ...
	I0425 12:33:06.186356    5918 request.go:629] Waited for 191.754443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/default/serviceaccounts
	I0425 12:33:06.186436    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/default/serviceaccounts
	I0425 12:33:06.186446    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:06.186457    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:06.186465    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:06.189334    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:06.189348    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:06.189355    5918 round_trippers.go:580]     Audit-Id: c4191164-91cc-4526-83b9-71c76672a50e
	I0425 12:33:06.189373    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:06.189379    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:06.189385    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:06.189389    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:06.189393    5918 round_trippers.go:580]     Content-Length: 261
	I0425 12:33:06.189396    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:06 GMT
	I0425 12:33:06.189410    5918 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"885"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a557ca19-d109-4b3a-9af5-e9a633494e34","resourceVersion":"311","creationTimestamp":"2024-04-25T19:24:16Z"}}]}
	I0425 12:33:06.189568    5918 default_sa.go:45] found service account: "default"
	I0425 12:33:06.189581    5918 default_sa.go:55] duration metric: took 195.03068ms for default service account to be created ...
	I0425 12:33:06.189587    5918 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 12:33:06.385615    5918 request.go:629] Waited for 195.956129ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:33:06.385661    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:33:06.385669    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:06.385680    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:06.385687    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:06.389635    5918 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:33:06.389648    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:06.389654    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:06.389660    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:06.389664    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:06.389667    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:06 GMT
	I0425 12:33:06.389671    5918 round_trippers.go:580]     Audit-Id: dce5d9fd-43e5-40c7-a63d-3bc6a568c690
	I0425 12:33:06.389674    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:06.390312    5918 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"885"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"871","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86435 chars]
	I0425 12:33:06.392269    5918 system_pods.go:86] 12 kube-system pods found
	I0425 12:33:06.392280    5918 system_pods.go:89] "coredns-7db6d8ff4d-w5z5l" [21ddb5bc-fcf1-4ec4-9fdb-8595d406b302] Running
	I0425 12:33:06.392284    5918 system_pods.go:89] "etcd-multinode-034000" [fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5] Running
	I0425 12:33:06.392288    5918 system_pods.go:89] "kindnet-7ktv2" [957b7d0e-0754-481e-aa73-6772434e58e3] Running
	I0425 12:33:06.392295    5918 system_pods.go:89] "kindnet-gmxwj" [eb9b5a06-bd76-43b9-b8f9-8f5e1243769d] Running
	I0425 12:33:06.392300    5918 system_pods.go:89] "kindnet-spsv9" [fa2c70be-02ec-404a-9eb0-7862c49d8b3b] Running
	I0425 12:33:06.392303    5918 system_pods.go:89] "kube-apiserver-multinode-034000" [d142ad34-9a12-42f9-b92d-e0f968eaaa14] Running
	I0425 12:33:06.392307    5918 system_pods.go:89] "kube-controller-manager-multinode-034000" [19072fbe-3cb2-4b92-bd98-b549daec4cf2] Running
	I0425 12:33:06.392311    5918 system_pods.go:89] "kube-proxy-d8zc5" [feefb48f-5488-4adc-b7e8-47f5d92bd2f8] Running
	I0425 12:33:06.392314    5918 system_pods.go:89] "kube-proxy-gmspl" [b0f6c7c8-ef54-4c63-9de2-05e01ace3e15] Running
	I0425 12:33:06.392317    5918 system_pods.go:89] "kube-proxy-mp7qm" [cc106198-3317-44e2-b1a7-cc5eac6dcadc] Running
	I0425 12:33:06.392320    5918 system_pods.go:89] "kube-scheduler-multinode-034000" [889fb9d4-d8d9-4a92-be22-d0ab1518bc93] Running
	I0425 12:33:06.392323    5918 system_pods.go:89] "storage-provisioner" [89c78c52-dabe-4a5b-ac3b-0209ccb11139] Running
	I0425 12:33:06.392350    5918 system_pods.go:126] duration metric: took 202.730233ms to wait for k8s-apps to be running ...
	I0425 12:33:06.392356    5918 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 12:33:06.392403    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:33:06.405144    5918 system_svc.go:56] duration metric: took 12.783641ms WaitForService to wait for kubelet
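[editor's note] The kubelet check above shells out to systemctl over SSH and relies only on the exit status. A local sketch of the same idea; it runs "systemctl is-active --quiet kubelet" directly on the current host rather than minikube's exact invocation shown in the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "systemctl is-active --quiet <unit>" prints nothing and exits 0 iff
	// the unit is active; a non-zero exit surfaces here as a non-nil error.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}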
	I0425 12:33:06.405158    5918 kubeadm.go:576] duration metric: took 13.20453441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 12:33:06.405170    5918 node_conditions.go:102] verifying NodePressure condition ...
	I0425 12:33:06.585665    5918 request.go:629] Waited for 180.447854ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes
	I0425 12:33:06.585754    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0425 12:33:06.585764    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:06.585775    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:06.585783    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:06.588895    5918 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:33:06.588915    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:06.588926    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:06 GMT
	I0425 12:33:06.588935    5918 round_trippers.go:580]     Audit-Id: ff6d6011-a60a-4dd0-a1ec-0770d6119b29
	I0425 12:33:06.588942    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:06.588949    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:06.588961    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:06.588968    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:06.589241    5918 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"885"},"items":[{"metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14932 chars]
	I0425 12:33:06.589775    5918 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:33:06.589785    5918 node_conditions.go:123] node cpu capacity is 2
	I0425 12:33:06.589791    5918 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:33:06.589794    5918 node_conditions.go:123] node cpu capacity is 2
	I0425 12:33:06.589797    5918 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:33:06.589801    5918 node_conditions.go:123] node cpu capacity is 2
	I0425 12:33:06.589803    5918 node_conditions.go:105] duration metric: took 184.62503ms to run NodePressure ...
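[editor's note] The two capacity figures logged per node come from each node's status. A sketch that lists nodes and prints the same ephemeral-storage and CPU capacities via client-go; the kubeconfig path is again an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Matches the two capacity figures logged per node above
		// (e.g. ephemeral-storage=17734596Ki, cpu=2).
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}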
	I0425 12:33:06.589811    5918 start.go:240] waiting for startup goroutines ...
	I0425 12:33:06.589841    5918 start.go:245] waiting for cluster config update ...
	I0425 12:33:06.589850    5918 start.go:254] writing updated cluster config ...
	I0425 12:33:06.611935    5918 out.go:177] 
	I0425 12:33:06.633826    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:33:06.633949    5918 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:33:06.656558    5918 out.go:177] * Starting "multinode-034000-m02" worker node in "multinode-034000" cluster
	I0425 12:33:06.698338    5918 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:33:06.698374    5918 cache.go:56] Caching tarball of preloaded images
	I0425 12:33:06.698603    5918 preload.go:173] Found /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 12:33:06.698623    5918 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 12:33:06.698756    5918 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:33:06.699666    5918 start.go:360] acquireMachinesLock for multinode-034000-m02: {Name:mk3030f9170bc25c9124548f80d3e90a8c4abff5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 12:33:06.699786    5918 start.go:364] duration metric: took 94.868µs to acquireMachinesLock for "multinode-034000-m02"
	I0425 12:33:06.699818    5918 start.go:96] Skipping create...Using existing machine configuration
	I0425 12:33:06.699844    5918 fix.go:54] fixHost starting: m02
	I0425 12:33:06.700279    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:33:06.700313    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:33:06.709585    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53185
	I0425 12:33:06.709938    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:33:06.710254    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:33:06.710265    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:33:06.710501    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:33:06.710618    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:33:06.710705    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:33:06.710783    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:33:06.710864    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:33:06.711787    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid 5309 missing from process table
	I0425 12:33:06.711815    5918 fix.go:112] recreateIfNeeded on multinode-034000-m02: state=Stopped err=<nil>
	I0425 12:33:06.711826    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	W0425 12:33:06.711907    5918 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 12:33:06.733738    5918 out.go:177] * Restarting existing hyperkit VM for "multinode-034000-m02" ...
	I0425 12:33:06.775558    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .Start
	I0425 12:33:06.775779    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:33:06.775823    5918 main.go:141] libmachine: (multinode-034000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/hyperkit.pid
	I0425 12:33:06.776981    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid 5309 missing from process table
	I0425 12:33:06.777014    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | pid 5309 is in state "Stopped"
	I0425 12:33:06.777026    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/hyperkit.pid...
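[editor's note] The stale-pid handling above (pid file present, pid absent from the process table, file removed) is the classic signal-0 liveness probe. A Unix-only sketch follows, with a hypothetical pid-file path standing in for the machine directory shown in the log.

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive reports whether a process with the given pid exists, using the
// signal-0 probe; this mirrors the "pid ... missing from process table"
// check in the hyperkit driver logs above. Unix only; an EPERM error would
// also read as "dead" here, which is fine for a root-run driver.
func pidAlive(pid int) bool {
	proc, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	const pidFile = "hyperkit.pid" // hypothetical path; the real one lives under .minikube/machines/<name>/
	data, err := os.ReadFile(pidFile)
	if err != nil {
		fmt.Println("no pid file:", err)
		return
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		fmt.Println("malformed pid file:", err)
		return
	}
	if !pidAlive(pid) {
		fmt.Printf("pid %d missing from process table, removing stale pid file\n", pid)
		os.Remove(pidFile)
	}
}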
	I0425 12:33:06.777280    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | Using UUID 94b40896-ddd7-48d5-b8c4-70380b6d3376
	I0425 12:33:06.802195    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | Generated MAC 46:26:de:d7:8e:2e
	I0425 12:33:06.802217    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000
	I0425 12:33:06.802341    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:06 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"94b40896-ddd7-48d5-b8c4-70380b6d3376", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aa9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0425 12:33:06.802388    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:06 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"94b40896-ddd7-48d5-b8c4-70380b6d3376", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aa9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0425 12:33:06.802436    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:06 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "94b40896-ddd7-48d5-b8c4-70380b6d3376", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/multinode-034000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/bzimage,/Users/j
enkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"}
	I0425 12:33:06.802485    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:06 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 94b40896-ddd7-48d5-b8c4-70380b6d3376 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/multinode-034000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/bzimage,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/mult
inode-034000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"
	I0425 12:33:06.802500    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:06 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0425 12:33:06.803854    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:06 DEBUG: hyperkit: Pid is 5949
	I0425 12:33:06.804251    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | Attempt 0
	I0425 12:33:06.804266    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:33:06.804331    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5949
	I0425 12:33:06.806295    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | Searching for 46:26:de:d7:8e:2e in /var/db/dhcpd_leases ...
	I0425 12:33:06.806371    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0425 12:33:06.806390    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:d3:c3:87:d3:c7 ID:1,1e:d3:c3:87:d3:c7 Lease:0x662c0151}
	I0425 12:33:06.806410    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:aa:be:2a:d5:f9:e ID:1,aa:be:2a:d5:f9:e Lease:0x662bffcd}
	I0425 12:33:06.806420    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:46:26:de:d7:8e:2e ID:1,46:26:de:d7:8e:2e Lease:0x662bff76}
	I0425 12:33:06.806431    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | Found match: 46:26:de:d7:8e:2e
	I0425 12:33:06.806441    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | IP: 192.169.0.17
	I0425 12:33:06.806501    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetConfigRaw
	I0425 12:33:06.807163    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:33:06.807358    5918 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:33:06.807779    5918 machine.go:94] provisionDockerMachine start ...
	I0425 12:33:06.807789    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:33:06.807939    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:06.808037    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:33:06.808127    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:06.808218    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:06.808316    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:33:06.808437    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:33:06.808615    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:33:06.808622    5918 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 12:33:06.811718    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:06 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0425 12:33:06.820478    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:06 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0425 12:33:06.821462    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:33:06.821487    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:33:06.821519    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:33:06.821535    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:33:07.202595    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0425 12:33:07.202607    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0425 12:33:07.317209    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:33:07.317224    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:33:07.317258    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:33:07.317276    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:33:07.318154    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0425 12:33:07.318169    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0425 12:33:12.566672    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:12 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0425 12:33:12.566711    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:12 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0425 12:33:12.566719    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:12 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0425 12:33:12.590888    5918 main.go:141] libmachine: (multinode-034000-m02) DBG | 2024/04/25 12:33:12 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0425 12:33:41.866422    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 12:33:41.866439    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetMachineName
	I0425 12:33:41.866568    5918 buildroot.go:166] provisioning hostname "multinode-034000-m02"
	I0425 12:33:41.866578    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetMachineName
	I0425 12:33:41.866685    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:41.866786    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:33:41.866874    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:41.866972    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:41.867048    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:33:41.867181    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:33:41.867370    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:33:41.867378    5918 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-034000-m02 && echo "multinode-034000-m02" | sudo tee /etc/hostname
	I0425 12:33:41.930240    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-034000-m02
	
	I0425 12:33:41.930255    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:41.930383    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:33:41.930488    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:41.930575    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:41.930672    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:33:41.930801    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:33:41.930946    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:33:41.930958    5918 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-034000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-034000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-034000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 12:33:41.988871    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 12:33:41.988887    5918 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18757-1425/.minikube CaCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18757-1425/.minikube}
	I0425 12:33:41.988896    5918 buildroot.go:174] setting up certificates
	I0425 12:33:41.988901    5918 provision.go:84] configureAuth start
	I0425 12:33:41.988908    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetMachineName
	I0425 12:33:41.989032    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:33:41.989127    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:41.989215    5918 provision.go:143] copyHostCerts
	I0425 12:33:41.989262    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:33:41.989322    5918 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem, removing ...
	I0425 12:33:41.989328    5918 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:33:41.989486    5918 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem (1078 bytes)
	I0425 12:33:41.989680    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:33:41.989720    5918 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem, removing ...
	I0425 12:33:41.989725    5918 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:33:41.989837    5918 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem (1123 bytes)
	I0425 12:33:41.989985    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:33:41.990023    5918 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem, removing ...
	I0425 12:33:41.990028    5918 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:33:41.990111    5918 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem (1675 bytes)
	I0425 12:33:41.990277    5918 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem org=jenkins.multinode-034000-m02 san=[127.0.0.1 192.169.0.17 localhost minikube multinode-034000-m02]
	I0425 12:33:42.130656    5918 provision.go:177] copyRemoteCerts
	I0425 12:33:42.130727    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 12:33:42.130751    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:42.130896    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:33:42.130988    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:42.131098    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:33:42.131174    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:33:42.165151    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 12:33:42.165222    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0425 12:33:42.184237    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 12:33:42.184299    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 12:33:42.203425    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 12:33:42.203520    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0425 12:33:42.222912    5918 provision.go:87] duration metric: took 233.990194ms to configureAuth
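
Editorial note: configureAuth above generates a server certificate signed by the local minikube CA, with SANs covering the loopback address, the VM IP, and the machine names (san=[127.0.0.1 192.169.0.17 localhost minikube multinode-034000-m02]), then copies it to /etc/docker. A minimal, self-contained Go sketch of that kind of issuance follows; it is illustrative only, and unlike minikube's provision code it generates a throwaway CA instead of loading ca.pem/ca-key.pem from disk:

    // Sketch: issue a TLS server certificate signed by a CA, with the SAN list
    // shown in the log. Error handling is elided with _ for brevity.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; minikube loads its existing CA key pair instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-034000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.17")},
            DNSNames:    []string{"localhost", "minikube", "multinode-034000-m02"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
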
	I0425 12:33:42.222932    5918 buildroot.go:189] setting minikube options for container-runtime
	I0425 12:33:42.223116    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:33:42.223137    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:33:42.223276    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:42.223368    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:33:42.223452    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:42.223525    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:42.223618    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:33:42.223746    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:33:42.223882    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:33:42.223890    5918 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0425 12:33:42.275737    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0425 12:33:42.275749    5918 buildroot.go:70] root file system type: tmpfs
	I0425 12:33:42.275834    5918 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0425 12:33:42.275848    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:42.275981    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:33:42.276091    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:42.276187    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:42.276285    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:33:42.276419    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:33:42.276562    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:33:42.276607    5918 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0425 12:33:42.338369    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0425 12:33:42.338385    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:42.338510    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:33:42.338598    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:42.338683    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:42.338768    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:33:42.338885    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:33:42.339025    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:33:42.339036    5918 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0425 12:33:43.868648    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0425 12:33:43.868664    5918 machine.go:97] duration metric: took 37.059765526s to provisionDockerMachine
	I0425 12:33:43.868673    5918 start.go:293] postStartSetup for "multinode-034000-m02" (driver="hyperkit")
	I0425 12:33:43.868683    5918 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 12:33:43.868693    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:33:43.868877    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 12:33:43.868891    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:43.868987    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:33:43.869078    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:43.869170    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:33:43.869260    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:33:43.903375    5918 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 12:33:43.906446    5918 command_runner.go:130] > NAME=Buildroot
	I0425 12:33:43.906456    5918 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0425 12:33:43.906461    5918 command_runner.go:130] > ID=buildroot
	I0425 12:33:43.906465    5918 command_runner.go:130] > VERSION_ID=2023.02.9
	I0425 12:33:43.906469    5918 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0425 12:33:43.906613    5918 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 12:33:43.906624    5918 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/addons for local assets ...
	I0425 12:33:43.906732    5918 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/files for local assets ...
	I0425 12:33:43.906911    5918 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> 18852.pem in /etc/ssl/certs
	I0425 12:33:43.906917    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /etc/ssl/certs/18852.pem
	I0425 12:33:43.907120    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 12:33:43.914337    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /etc/ssl/certs/18852.pem (1708 bytes)
	I0425 12:33:43.934182    5918 start.go:296] duration metric: took 65.497529ms for postStartSetup
	I0425 12:33:43.934202    5918 fix.go:56] duration metric: took 37.233260037s for fixHost
	I0425 12:33:43.934223    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:43.934366    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:33:43.934459    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:43.934555    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:43.934644    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:33:43.934781    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:33:43.934915    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0425 12:33:43.934922    5918 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 12:33:43.986781    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714073623.773562292
	
	I0425 12:33:43.986792    5918 fix.go:216] guest clock: 1714073623.773562292
	I0425 12:33:43.986798    5918 fix.go:229] Guest: 2024-04-25 12:33:43.773562292 -0700 PDT Remote: 2024-04-25 12:33:43.934213 -0700 PDT m=+79.541564900 (delta=-160.650708ms)
	I0425 12:33:43.986816    5918 fix.go:200] guest clock delta is within tolerance: -160.650708ms
	I0425 12:33:43.986821    5918 start.go:83] releasing machines lock for "multinode-034000-m02", held for 37.285905468s
	I0425 12:33:43.986841    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:33:43.986965    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:33:44.012575    5918 out.go:177] * Found network options:
	I0425 12:33:44.032145    5918 out.go:177]   - NO_PROXY=192.169.0.16
	W0425 12:33:44.053465    5918 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 12:33:44.053496    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:33:44.054237    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:33:44.054472    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	W0425 12:33:44.054739    5918 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 12:33:44.054832    5918 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0425 12:33:44.054853    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:44.054895    5918 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 12:33:44.054947    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:44.055023    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:33:44.055110    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:33:44.055188    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:44.055313    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:44.055375    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:33:44.055475    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:33:44.055609    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:33:44.055647    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:33:44.130629    5918 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0425 12:33:44.131477    5918 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0425 12:33:44.131509    5918 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 12:33:44.131597    5918 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 12:33:44.146222    5918 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0425 12:33:44.146344    5918 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 12:33:44.146353    5918 start.go:494] detecting cgroup driver to use...
	I0425 12:33:44.146429    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:33:44.160898    5918 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0425 12:33:44.161128    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0425 12:33:44.169949    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0425 12:33:44.178886    5918 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0425 12:33:44.178931    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0425 12:33:44.187889    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:33:44.197096    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0425 12:33:44.206506    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:33:44.215814    5918 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 12:33:44.225238    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0425 12:33:44.234379    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0425 12:33:44.243333    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0425 12:33:44.252458    5918 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 12:33:44.260515    5918 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0425 12:33:44.260719    5918 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 12:33:44.268685    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:33:44.363463    5918 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0425 12:33:44.382149    5918 start.go:494] detecting cgroup driver to use...
	I0425 12:33:44.382227    5918 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0425 12:33:44.397457    5918 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0425 12:33:44.397607    5918 command_runner.go:130] > [Unit]
	I0425 12:33:44.397619    5918 command_runner.go:130] > Description=Docker Application Container Engine
	I0425 12:33:44.397624    5918 command_runner.go:130] > Documentation=https://docs.docker.com
	I0425 12:33:44.397629    5918 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0425 12:33:44.397634    5918 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0425 12:33:44.397646    5918 command_runner.go:130] > StartLimitBurst=3
	I0425 12:33:44.397650    5918 command_runner.go:130] > StartLimitIntervalSec=60
	I0425 12:33:44.397656    5918 command_runner.go:130] > [Service]
	I0425 12:33:44.397660    5918 command_runner.go:130] > Type=notify
	I0425 12:33:44.397663    5918 command_runner.go:130] > Restart=on-failure
	I0425 12:33:44.397668    5918 command_runner.go:130] > Environment=NO_PROXY=192.169.0.16
	I0425 12:33:44.397673    5918 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0425 12:33:44.397684    5918 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0425 12:33:44.397690    5918 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0425 12:33:44.397698    5918 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0425 12:33:44.397704    5918 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0425 12:33:44.397710    5918 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0425 12:33:44.397716    5918 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0425 12:33:44.397727    5918 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0425 12:33:44.397733    5918 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0425 12:33:44.397737    5918 command_runner.go:130] > ExecStart=
	I0425 12:33:44.397749    5918 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0425 12:33:44.397754    5918 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0425 12:33:44.397762    5918 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0425 12:33:44.397777    5918 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0425 12:33:44.397787    5918 command_runner.go:130] > LimitNOFILE=infinity
	I0425 12:33:44.397793    5918 command_runner.go:130] > LimitNPROC=infinity
	I0425 12:33:44.397797    5918 command_runner.go:130] > LimitCORE=infinity
	I0425 12:33:44.397801    5918 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0425 12:33:44.397806    5918 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0425 12:33:44.397810    5918 command_runner.go:130] > TasksMax=infinity
	I0425 12:33:44.397814    5918 command_runner.go:130] > TimeoutStartSec=0
	I0425 12:33:44.397819    5918 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0425 12:33:44.397823    5918 command_runner.go:130] > Delegate=yes
	I0425 12:33:44.397828    5918 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0425 12:33:44.397839    5918 command_runner.go:130] > KillMode=process
	I0425 12:33:44.397843    5918 command_runner.go:130] > [Install]
	I0425 12:33:44.397846    5918 command_runner.go:130] > WantedBy=multi-user.target
	I0425 12:33:44.397952    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:33:44.409107    5918 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 12:33:44.424496    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:33:44.434902    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:33:44.466247    5918 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0425 12:33:44.487209    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:33:44.497510    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:33:44.512344    5918 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0425 12:33:44.512998    5918 ssh_runner.go:195] Run: which cri-dockerd
	I0425 12:33:44.515671    5918 command_runner.go:130] > /usr/bin/cri-dockerd
	I0425 12:33:44.515742    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0425 12:33:44.523271    5918 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0425 12:33:44.536834    5918 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0425 12:33:44.634970    5918 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0425 12:33:44.740585    5918 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0425 12:33:44.740622    5918 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0425 12:33:44.754694    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:33:44.851376    5918 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0425 12:33:47.038652    5918 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.187181308s)
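
Editorial note: at this point docker has been pointed at the cgroupfs cgroup driver via a small /etc/docker/daemon.json (130 bytes per the scp line above) and restarted. The log does not show the file's exact contents; assuming the conventional exec-opts form for selecting a cgroup driver, a sketch that emits such a file:

    // Sketch under stated assumptions: produce a daemon.json that pins dockerd
    // to the cgroupfs cgroup driver. The field set is an assumption based on
    // the log's intent, not a dump of minikube's actual file.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b))
    }
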
	I0425 12:33:47.038713    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0425 12:33:47.049965    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0425 12:33:47.060472    5918 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0425 12:33:47.160678    5918 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0425 12:33:47.277660    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:33:47.388249    5918 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0425 12:33:47.401882    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0425 12:33:47.412759    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:33:47.504895    5918 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0425 12:33:47.564311    5918 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0425 12:33:47.564388    5918 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0425 12:33:47.568717    5918 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0425 12:33:47.568731    5918 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0425 12:33:47.568743    5918 command_runner.go:130] > Device: 0,22	Inode: 768         Links: 1
	I0425 12:33:47.568752    5918 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0425 12:33:47.568757    5918 command_runner.go:130] > Access: 2024-04-25 19:33:47.418832629 +0000
	I0425 12:33:47.568763    5918 command_runner.go:130] > Modify: 2024-04-25 19:33:47.418832629 +0000
	I0425 12:33:47.568767    5918 command_runner.go:130] > Change: 2024-04-25 19:33:47.419859496 +0000
	I0425 12:33:47.568770    5918 command_runner.go:130] >  Birth: -
	I0425 12:33:47.568780    5918 start.go:562] Will wait 60s for crictl version
	I0425 12:33:47.568821    5918 ssh_runner.go:195] Run: which crictl
	I0425 12:33:47.571811    5918 command_runner.go:130] > /usr/bin/crictl
	I0425 12:33:47.571979    5918 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 12:33:47.602415    5918 command_runner.go:130] > Version:  0.1.0
	I0425 12:33:47.602428    5918 command_runner.go:130] > RuntimeName:  docker
	I0425 12:33:47.602445    5918 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0425 12:33:47.602519    5918 command_runner.go:130] > RuntimeApiVersion:  v1
	I0425 12:33:47.603517    5918 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0425 12:33:47.603586    5918 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0425 12:33:47.619066    5918 command_runner.go:130] > 26.0.2
	I0425 12:33:47.619853    5918 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0425 12:33:47.636204    5918 command_runner.go:130] > 26.0.2
	I0425 12:33:47.660098    5918 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0425 12:33:47.702274    5918 out.go:177]   - env NO_PROXY=192.169.0.16
	I0425 12:33:47.724373    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:33:47.724748    5918 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0425 12:33:47.729375    5918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 12:33:47.739878    5918 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:33:47.740049    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:33:47.740275    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:33:47.740290    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:33:47.749280    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53206
	I0425 12:33:47.749638    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:33:47.749956    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:33:47.749967    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:33:47.750196    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:33:47.750318    5918 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:33:47.750398    5918 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:33:47.750484    5918 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5931
	I0425 12:33:47.751424    5918 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:33:47.751680    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:33:47.751696    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:33:47.760347    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53208
	I0425 12:33:47.760682    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:33:47.761003    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:33:47.761015    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:33:47.761213    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:33:47.761317    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:33:47.761409    5918 certs.go:68] Setting up /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000 for IP: 192.169.0.17
	I0425 12:33:47.761415    5918 certs.go:194] generating shared ca certs ...
	I0425 12:33:47.761436    5918 certs.go:226] acquiring lock for ca certs: {Name:mk1f3cabc8bfb1fa57eb09572b98c6852173235a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:33:47.761609    5918 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key
	I0425 12:33:47.761681    5918 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key
	I0425 12:33:47.761691    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 12:33:47.761713    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 12:33:47.761736    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 12:33:47.761755    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 12:33:47.761842    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem (1338 bytes)
	W0425 12:33:47.761889    5918 certs.go:480] ignoring /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885_empty.pem, impossibly tiny 0 bytes
	I0425 12:33:47.761899    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 12:33:47.761934    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem (1078 bytes)
	I0425 12:33:47.761969    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem (1123 bytes)
	I0425 12:33:47.761999    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem (1675 bytes)
	I0425 12:33:47.762062    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem (1708 bytes)
	I0425 12:33:47.762093    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:33:47.762112    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem -> /usr/share/ca-certificates/1885.pem
	I0425 12:33:47.762129    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /usr/share/ca-certificates/18852.pem
	I0425 12:33:47.762153    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 12:33:47.781707    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 12:33:47.801280    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 12:33:47.820357    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 12:33:47.840075    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 12:33:47.859118    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem --> /usr/share/ca-certificates/1885.pem (1338 bytes)
	I0425 12:33:47.878008    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /usr/share/ca-certificates/18852.pem (1708 bytes)
	I0425 12:33:47.897010    5918 ssh_runner.go:195] Run: openssl version
	I0425 12:33:47.900968    5918 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0425 12:33:47.901167    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 12:33:47.910271    5918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:33:47.913511    5918 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 25 18:31 /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:33:47.913560    5918 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:31 /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:33:47.913595    5918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:33:47.917893    5918 command_runner.go:130] > b5213941
	I0425 12:33:47.918140    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 12:33:47.927286    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1885.pem && ln -fs /usr/share/ca-certificates/1885.pem /etc/ssl/certs/1885.pem"
	I0425 12:33:47.936252    5918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1885.pem
	I0425 12:33:47.939298    5918 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 25 18:39 /usr/share/ca-certificates/1885.pem
	I0425 12:33:47.939473    5918 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:39 /usr/share/ca-certificates/1885.pem
	I0425 12:33:47.939506    5918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1885.pem
	I0425 12:33:47.943476    5918 command_runner.go:130] > 51391683
	I0425 12:33:47.943661    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1885.pem /etc/ssl/certs/51391683.0"
	I0425 12:33:47.952689    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18852.pem && ln -fs /usr/share/ca-certificates/18852.pem /etc/ssl/certs/18852.pem"
	I0425 12:33:47.961794    5918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18852.pem
	I0425 12:33:47.964936    5918 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 25 18:39 /usr/share/ca-certificates/18852.pem
	I0425 12:33:47.965093    5918 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:39 /usr/share/ca-certificates/18852.pem
	I0425 12:33:47.965127    5918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18852.pem
	I0425 12:33:47.969076    5918 command_runner.go:130] > 3ec20f2e
	I0425 12:33:47.969248    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18852.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 12:33:47.978412    5918 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 12:33:47.981394    5918 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 12:33:47.981413    5918 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 12:33:47.981444    5918 kubeadm.go:928] updating node {m02 192.169.0.17 8443 v1.30.0 docker false true} ...
	I0425 12:33:47.981499    5918 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-034000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 12:33:47.981539    5918 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 12:33:47.989678    5918 command_runner.go:130] > kubeadm
	I0425 12:33:47.989687    5918 command_runner.go:130] > kubectl
	I0425 12:33:47.989694    5918 command_runner.go:130] > kubelet
	I0425 12:33:47.989709    5918 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 12:33:47.989752    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0425 12:33:47.997792    5918 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0425 12:33:48.011280    5918 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 12:33:48.024567    5918 ssh_runner.go:195] Run: grep 192.169.0.16	control-plane.minikube.internal$ /etc/hosts
	I0425 12:33:48.027458    5918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 12:33:48.037845    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:33:48.135634    5918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 12:33:48.150698    5918 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:33:48.150997    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:33:48.151017    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:33:48.159904    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53210
	I0425 12:33:48.160306    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:33:48.160688    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:33:48.160705    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:33:48.160930    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:33:48.161048    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:33:48.161152    5918 start.go:316] joinCluster: &{Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.18 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:33:48.161234    5918 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0425 12:33:48.161254    5918 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:33:48.161530    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:33:48.161547    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:33:48.170783    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53212
	I0425 12:33:48.171262    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:33:48.171783    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:33:48.171798    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:33:48.172048    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:33:48.172188    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:33:48.172282    5918 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:33:48.172452    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:33:48.172672    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:33:48.172695    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:33:48.182344    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53214
	I0425 12:33:48.182812    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:33:48.183141    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:33:48.183154    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:33:48.183399    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:33:48.183523    5918 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:33:48.183607    5918 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:33:48.183710    5918 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5931
	I0425 12:33:48.184706    5918 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:33:48.184982    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:33:48.185021    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:33:48.194051    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53216
	I0425 12:33:48.194417    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:33:48.194839    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:33:48.194857    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:33:48.195060    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:33:48.195184    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:33:48.195287    5918 api_server.go:166] Checking apiserver status ...
	I0425 12:33:48.195340    5918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:33:48.195350    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:33:48.195436    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:33:48.195551    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:33:48.195640    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:33:48.195712    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:33:48.231072    5918 command_runner.go:130] > 1675
	I0425 12:33:48.231137    5918 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1675/cgroup
	W0425 12:33:48.239827    5918 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1675/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:33:48.239893    5918 ssh_runner.go:195] Run: ls
	I0425 12:33:48.243244    5918 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:33:48.246410    5918 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:33:48.246473    5918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-034000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0425 12:33:48.334601    5918 command_runner.go:130] > node/multinode-034000-m02 cordoned
	I0425 12:33:51.351232    5918 command_runner.go:130] > pod "busybox-fc5497c4f-mw494" has DeletionTimestamp older than 1 seconds, skipping
	I0425 12:33:51.351257    5918 command_runner.go:130] > node/multinode-034000-m02 drained
	I0425 12:33:51.352838    5918 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-gmxwj, kube-system/kube-proxy-mp7qm
	I0425 12:33:51.352928    5918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-034000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.106349238s)
	I0425 12:33:51.352938    5918 node.go:128] successfully drained node "multinode-034000-m02"
	I0425 12:33:51.352958    5918 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0425 12:33:51.352972    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:33:51.353120    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:33:51.353211    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:33:51.353314    5918 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:33:51.353412    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:33:51.437235    5918 command_runner.go:130] > [preflight] Running pre-flight checks
	I0425 12:33:51.437410    5918 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0425 12:33:51.437418    5918 command_runner.go:130] > [reset] Stopping the kubelet service
	I0425 12:33:51.443367    5918 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0425 12:33:51.664776    5918 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0425 12:33:51.665602    5918 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0425 12:33:51.665613    5918 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0425 12:33:51.665667    5918 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0425 12:33:51.665676    5918 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0425 12:33:51.665682    5918 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0425 12:33:51.665687    5918 command_runner.go:130] > to reset your system's IPVS tables.
	I0425 12:33:51.665693    5918 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0425 12:33:51.665704    5918 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0425 12:33:51.666503    5918 command_runner.go:130] ! W0425 19:33:51.425519    1184 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0425 12:33:51.666531    5918 command_runner.go:130] ! W0425 19:33:51.655929    1184 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod c1318c0ed51edef78f64d2101ee20249a4086f01add09c52d20105887ffc00ea: output: E0425 19:33:51.551645    1214 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-mw494_default\" network: cni config uninitialized" podSandboxID="c1318c0ed51edef78f64d2101ee20249a4086f01add09c52d20105887ffc00ea"
	I0425 12:33:51.666547    5918 command_runner.go:130] ! time="2024-04-25T19:33:51Z" level=fatal msg="stopping the pod sandbox \"c1318c0ed51edef78f64d2101ee20249a4086f01add09c52d20105887ffc00ea\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-mw494_default\" network: cni config uninitialized"
	I0425 12:33:51.666555    5918 command_runner.go:130] ! : exit status 1
	I0425 12:33:51.666566    5918 node.go:155] successfully reset node "multinode-034000-m02"
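The sshutil.go line above shows everything needed to reproduce the reset step by hand: IP 192.169.0.17, port 22, the machine's id_rsa, user "docker". A minimal sketch using golang.org/x/crypto/ssh with those values assumed from the log; this is illustrative, not minikube's actual ssh_runner:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and address are the values logged above; adjust for your setup.
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
        }
        client, err := ssh.Dial("tcp", "192.169.0.17:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        // Same reset command as the log; a non-zero exit (as seen above with the
        // CNI teardown warning) is tolerated because --force was already applied.
        out, err := sess.CombinedOutput(`sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock`)
        fmt.Print(string(out))
        if err != nil {
            log.Println("reset exited non-zero:", err)
        }
    }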
	I0425 12:33:51.666843    5918 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:33:51.667059    5918 kapi.go:59] client config for multinode-034000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key", CAFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdc47ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0425 12:33:51.667328    5918 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0425 12:33:51.667360    5918 round_trippers.go:463] DELETE https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:51.667364    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:51.667370    5918 round_trippers.go:473]     Content-Type: application/json
	I0425 12:33:51.667374    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:51.667377    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:51.670129    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:51.670138    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:51.670143    5918 round_trippers.go:580]     Audit-Id: e8d2bac8-f762-47da-9648-efc6931586d5
	I0425 12:33:51.670147    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:51.670151    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:51.670153    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:51.670156    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:51.670158    5918 round_trippers.go:580]     Content-Length: 171
	I0425 12:33:51.670161    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:51 GMT
	I0425 12:33:51.670171    5918 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-034000-m02","kind":"nodes","uid":"cde39952-471f-4875-893f-0164a7600dc1"}}
	I0425 12:33:51.670191    5918 node.go:180] successfully deleted node "multinode-034000-m02"
	I0425 12:33:51.670198    5918 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
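As the round-tripper lines show, removing the node from the cluster is a single DELETE against /api/v1/nodes/<name>. The client-go equivalent, assuming the kubeconfig path from this log (sketch, not minikube's node.go):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/18757-1425/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of the logged DELETE https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
        if err := cs.CoreV1().Nodes().Delete(context.Background(),
            "multinode-034000-m02", metav1.DeleteOptions{}); err != nil {
            log.Fatal(err)
        }
        fmt.Println("node deleted")
    }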
	I0425 12:33:51.670218    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0425 12:33:51.670234    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:33:51.670376    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:33:51.670480    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:33:51.670572    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:33:51.670655    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:33:51.740529    5918 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token n5q0v8.0rae3jeds74ay7d7 --discovery-token-ca-cert-hash sha256:00651354ee141ab473da454fccfa896339ebbff71705c055a7dbbfb8ae906871 
	I0425 12:33:51.741528    5918 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0425 12:33:51.741545    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n5q0v8.0rae3jeds74ay7d7 --discovery-token-ca-cert-hash sha256:00651354ee141ab473da454fccfa896339ebbff71705c055a7dbbfb8ae906871 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-034000-m02"
	I0425 12:33:51.771495    5918 command_runner.go:130] > [preflight] Running pre-flight checks
	I0425 12:33:51.861310    5918 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0425 12:33:51.861333    5918 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0425 12:33:51.895758    5918 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 12:33:51.895773    5918 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 12:33:51.895873    5918 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0425 12:33:52.004847    5918 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 12:33:52.498701    5918 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.187533ms
	I0425 12:33:52.498717    5918 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0425 12:33:53.002877    5918 command_runner.go:130] > This node has joined the cluster:
	I0425 12:33:53.002892    5918 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0425 12:33:53.002897    5918 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0425 12:33:53.002903    5918 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0425 12:33:53.004471    5918 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 12:33:53.004491    5918 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n5q0v8.0rae3jeds74ay7d7 --discovery-token-ca-cert-hash sha256:00651354ee141ab473da454fccfa896339ebbff71705c055a7dbbfb8ae906871 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-034000-m02": (1.262894293s)
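Re-joining follows the standard kubeadm pattern: mint a join command on the control plane with a non-expiring token, then run it on the worker with the extra flags seen above (--ignore-preflight-errors=all, the cri-dockerd socket, an explicit node name). A minimal sketch of the first half, run on the control-plane host (illustrative only):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // --ttl=0 makes the bootstrap token non-expiring, matching the log.
        out, err := exec.Command("sudo", "kubeadm", "token", "create",
            "--print-join-command", "--ttl=0").Output()
        if err != nil {
            log.Fatal(err)
        }
        joinCmd := strings.TrimSpace(string(out))
        fmt.Println("run on the worker:", joinCmd,
            "--ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock")
    }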
	I0425 12:33:53.004512    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0425 12:33:53.240678    5918 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
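The symlink message above is systemd confirming the enable step; the full sequence is daemon-reload, enable (persist across reboot), start (bring it up now). A trivial sketch running the same shell line on the node (illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("kubelet enable failed:", err)
        }
    }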
	I0425 12:33:53.240759    5918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-034000-m02 minikube.k8s.io/updated_at=2024_04_25T12_33_53_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=multinode-034000 minikube.k8s.io/primary=false
	I0425 12:33:53.314159    5918 command_runner.go:130] > node/multinode-034000-m02 labeled
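The labeling step is a plain `kubectl label --overwrite`; the same effect is available from client-go with a merge patch, which creates or replaces the listed labels and leaves the rest of the node untouched. A sketch with two of the logged label pairs (kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false"}}}`)
        node, err := cs.CoreV1().Nodes().Patch(context.Background(),
            "multinode-034000-m02", types.MergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("labeled node", node.Name)
    }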
	I0425 12:33:53.315346    5918 start.go:318] duration metric: took 5.154038385s to joinCluster
	I0425 12:33:53.315390    5918 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0425 12:33:53.315593    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:33:53.358079    5918 out.go:177] * Verifying Kubernetes components...
	I0425 12:33:53.395092    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:33:53.511090    5918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 12:33:53.521774    5918 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:33:53.521977    5918 kapi.go:59] client config for multinode-034000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key", CAFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdc47ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0425 12:33:53.522151    5918 node_ready.go:35] waiting up to 6m0s for node "multinode-034000-m02" to be "Ready" ...
	I0425 12:33:53.522197    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:53.522201    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:53.522207    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:53.522211    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:53.523792    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:53.523806    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:53.523827    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:53 GMT
	I0425 12:33:53.523834    5918 round_trippers.go:580]     Audit-Id: 855953fd-e93a-4982-9624-76f85b09358b
	I0425 12:33:53.523837    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:53.523841    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:53.523844    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:53.523847    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:53.524025    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"f17dfbe6-22b9-442b-b838-0b8c93835a05","resourceVersion":"958","creationTimestamp":"2024-04-25T19:33:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_33_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:33:52Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0425 12:33:54.022358    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:54.022379    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:54.022388    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:54.022393    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:54.024250    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:54.024262    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:54.024268    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:54.024272    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:54.024280    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:54 GMT
	I0425 12:33:54.024285    5918 round_trippers.go:580]     Audit-Id: 786bf82d-53ac-47dd-894c-371be479b774
	I0425 12:33:54.024287    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:54.024297    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:54.024354    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"f17dfbe6-22b9-442b-b838-0b8c93835a05","resourceVersion":"958","creationTimestamp":"2024-04-25T19:33:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_33_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:33:52Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0425 12:33:54.522581    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:54.522600    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:54.522647    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:54.522654    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:54.524777    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:54.524788    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:54.524794    5918 round_trippers.go:580]     Audit-Id: b2c80dd7-95d9-4071-9171-25bc84ba1b64
	I0425 12:33:54.524798    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:54.524800    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:54.524803    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:54.524807    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:54.524810    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:54 GMT
	I0425 12:33:54.525110    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"f17dfbe6-22b9-442b-b838-0b8c93835a05","resourceVersion":"958","creationTimestamp":"2024-04-25T19:33:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_33_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:33:52Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0425 12:33:55.022446    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:55.022479    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:55.022488    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:55.022496    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:55.024220    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:55.024231    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:55.024236    5918 round_trippers.go:580]     Audit-Id: 485f8bba-4866-4131-b71c-8c7cbcf5fe43
	I0425 12:33:55.024239    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:55.024245    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:55.024249    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:55.024251    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:55.024254    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:55 GMT
	I0425 12:33:55.024319    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"f17dfbe6-22b9-442b-b838-0b8c93835a05","resourceVersion":"958","creationTimestamp":"2024-04-25T19:33:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_33_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:33:52Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0425 12:33:55.522792    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:55.522829    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:55.522841    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:55.522848    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:55.525353    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:55.525370    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:55.525378    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:55.525382    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:55.525389    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:55.525392    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:55.525395    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:55 GMT
	I0425 12:33:55.525398    5918 round_trippers.go:580]     Audit-Id: 326024cd-9352-471e-b8e5-79116da26c72
	I0425 12:33:55.525540    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"f17dfbe6-22b9-442b-b838-0b8c93835a05","resourceVersion":"958","creationTimestamp":"2024-04-25T19:33:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_33_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:33:52Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0425 12:33:55.525752    5918 node_ready.go:53] node "multinode-034000-m02" has status "Ready":"False"
	I0425 12:33:56.022554    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:56.022573    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:56.022582    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:56.022587    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:56.024274    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:56.024284    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:56.024289    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:56.024293    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:56.024295    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:56.024298    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:56.024300    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:56 GMT
	I0425 12:33:56.024303    5918 round_trippers.go:580]     Audit-Id: 9dd8e3a0-2609-4504-a89e-a6f02dcd4b58
	I0425 12:33:56.024496    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"f17dfbe6-22b9-442b-b838-0b8c93835a05","resourceVersion":"958","creationTimestamp":"2024-04-25T19:33:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_33_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:33:52Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0425 12:33:56.522578    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:56.522602    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:56.522613    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:56.522620    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:56.524855    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:56.524868    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:56.524874    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:56.524879    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:56.524882    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:56.524887    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:56.524891    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:56 GMT
	I0425 12:33:56.524894    5918 round_trippers.go:580]     Audit-Id: 966e8b0f-acf1-4182-a818-af9d78e3e418
	I0425 12:33:56.525129    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"f17dfbe6-22b9-442b-b838-0b8c93835a05","resourceVersion":"958","creationTimestamp":"2024-04-25T19:33:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_33_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:33:52Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0425 12:33:57.022382    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:57.022398    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.022414    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.022418    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.024478    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:57.024488    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.024493    5918 round_trippers.go:580]     Audit-Id: 7e0afdbf-0e89-43e0-818d-e9f9812eaf6c
	I0425 12:33:57.024496    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.024499    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.024502    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.024518    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.024524    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:57 GMT
	I0425 12:33:57.024610    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"f17dfbe6-22b9-442b-b838-0b8c93835a05","resourceVersion":"958","creationTimestamp":"2024-04-25T19:33:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_33_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:33:52Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3563 chars]
	I0425 12:33:57.522860    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:57.522888    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.522899    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.522905    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.525767    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:57.525784    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.525795    5918 round_trippers.go:580]     Audit-Id: 99d2f743-548d-4878-a860-7bc739add105
	I0425 12:33:57.525799    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.525803    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.525806    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.525811    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.525816    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:57 GMT
	I0425 12:33:57.525891    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"f17dfbe6-22b9-442b-b838-0b8c93835a05","resourceVersion":"985","creationTimestamp":"2024-04-25T19:33:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_33_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:33:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3930 chars]
	I0425 12:33:57.526124    5918 node_ready.go:49] node "multinode-034000-m02" has status "Ready":"True"
	I0425 12:33:57.526136    5918 node_ready.go:38] duration metric: took 4.003851355s for node "multinode-034000-m02" to be "Ready" ...
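The four-second wait just logged is a 500ms GET loop against /api/v1/nodes/<name> until the node's NodeReady condition flips to True (visible above as resourceVersion 958 giving way to 985). A minimal client-go sketch of the same loop, assuming a local kubeconfig; not minikube's node_ready.go:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Poll every 500ms for up to 6 minutes, like the loop above.
        err = wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx,
                    "multinode-034000-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient; keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("node is Ready")
    }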
	I0425 12:33:57.526144    5918 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
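The pod wait that follows lists kube-system pods once, then checks each system-critical component by the label selectors enumerated above. A compact sketch of the per-selector readiness check (illustrative, not minikube's pod_ready.go):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // ready reports whether a pod's PodReady condition is True.
    func ready(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Same label keys the log lists for system-critical pods.
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
                metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                log.Fatal(err)
            }
            for i := range pods.Items {
                fmt.Printf("%-45s ready=%v\n", pods.Items[i].Name, ready(&pods.Items[i]))
            }
        }
    }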
	I0425 12:33:57.526190    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:33:57.526197    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.526204    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.526209    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.528865    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:57.528874    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.528879    5918 round_trippers.go:580]     Audit-Id: 7660d8a4-ba4c-41f2-9014-28c8a4d391bb
	I0425 12:33:57.528887    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.528891    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.528893    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.528895    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.528898    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:57 GMT
	I0425 12:33:57.529671    5918 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"987"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"871","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86435 chars]
	I0425 12:33:57.531611    5918 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:57.531654    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:33:57.531658    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.531663    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.531668    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.533300    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:57.533310    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.533315    5918 round_trippers.go:580]     Audit-Id: 4db6bc68-1126-48df-a271-0019363af238
	I0425 12:33:57.533319    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.533322    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.533325    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.533328    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.533335    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:57 GMT
	I0425 12:33:57.533455    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"871","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6783 chars]
	I0425 12:33:57.533702    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:57.533709    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.533715    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.533719    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.534693    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:33:57.534700    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.534705    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.534709    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:57 GMT
	I0425 12:33:57.534712    5918 round_trippers.go:580]     Audit-Id: 60541116-085c-4637-87cb-fe17e5d14fa7
	I0425 12:33:57.534714    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.534717    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.534719    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.534801    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:57.535008    5918 pod_ready.go:92] pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:57.535018    5918 pod_ready.go:81] duration metric: took 3.397056ms for pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:57.535027    5918 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:57.535059    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:33:57.535063    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.535069    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.535075    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.535983    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:33:57.535994    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.536000    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.536003    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.536006    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:57 GMT
	I0425 12:33:57.536009    5918 round_trippers.go:580]     Audit-Id: 3eb879fd-6dde-40db-a5a2-cccb8ebbba43
	I0425 12:33:57.536012    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.536014    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.536190    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"885","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0425 12:33:57.536414    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:57.536421    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.536426    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.536430    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.537236    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:33:57.537244    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.537248    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.537250    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:57 GMT
	I0425 12:33:57.537253    5918 round_trippers.go:580]     Audit-Id: fa1ec40f-6bdc-4ae4-a8a0-06dc3ae8ea53
	I0425 12:33:57.537256    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.537264    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.537268    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.537435    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:57.537613    5918 pod_ready.go:92] pod "etcd-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:57.537624    5918 pod_ready.go:81] duration metric: took 2.591282ms for pod "etcd-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:57.537634    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:57.537660    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-034000
	I0425 12:33:57.537664    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.537681    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.537688    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.539754    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:57.539762    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.539767    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.539770    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.539772    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.539775    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:57 GMT
	I0425 12:33:57.539778    5918 round_trippers.go:580]     Audit-Id: 4b56d32d-e9e8-4b33-97c7-843da8e21e15
	I0425 12:33:57.539781    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.539903    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-034000","namespace":"kube-system","uid":"d142ad34-9a12-42f9-b92d-e0f968eaaa14","resourceVersion":"869","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"d809c763efd59e895582aab9f4e65d83","kubernetes.io/config.mirror":"d809c763efd59e895582aab9f4e65d83","kubernetes.io/config.seen":"2024-04-25T19:24:03.349967563Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0425 12:33:57.540156    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:57.540164    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.540169    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.540173    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.543496    5918 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:33:57.543505    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.543510    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:57 GMT
	I0425 12:33:57.543513    5918 round_trippers.go:580]     Audit-Id: a5d6f05a-24a7-428d-a083-850bf3061b59
	I0425 12:33:57.543516    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.543521    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.543524    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.543530    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.543776    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:57.543960    5918 pod_ready.go:92] pod "kube-apiserver-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:57.543972    5918 pod_ready.go:81] duration metric: took 6.332158ms for pod "kube-apiserver-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:57.543979    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:57.544010    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-034000
	I0425 12:33:57.544014    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.544020    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.544024    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.545375    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:57.545383    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.545387    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.545391    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:57 GMT
	I0425 12:33:57.545393    5918 round_trippers.go:580]     Audit-Id: 8678c5e0-68a5-45d5-bc10-1aec836192a2
	I0425 12:33:57.545396    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.545399    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.545401    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.545608    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-034000","namespace":"kube-system","uid":"19072fbe-3cb2-4b92-bd98-b549daec4cf2","resourceVersion":"862","creationTimestamp":"2024-04-25T19:24:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8241dd03fc3448a4525ccebdefebf535","kubernetes.io/config.mirror":"8241dd03fc3448a4525ccebdefebf535","kubernetes.io/config.seen":"2024-04-25T19:23:58.495195502Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0425 12:33:57.545852    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:57.545859    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.545864    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.545868    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.546937    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:57.546945    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.546953    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.546958    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.546961    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.546964    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:57 GMT
	I0425 12:33:57.546967    5918 round_trippers.go:580]     Audit-Id: 8848e7b7-15c0-49d1-8e14-bf4a37f9289a
	I0425 12:33:57.546969    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.547152    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:57.547315    5918 pod_ready.go:92] pod "kube-controller-manager-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:57.547324    5918 pod_ready.go:81] duration metric: took 3.339667ms for pod "kube-controller-manager-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:57.547332    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d8zc5" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:57.723171    5918 request.go:629] Waited for 175.763924ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8zc5
	I0425 12:33:57.723324    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8zc5
	I0425 12:33:57.723336    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.723347    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.723383    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.726391    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:57.726406    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.726413    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.726418    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.726422    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.726438    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.726447    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:57 GMT
	I0425 12:33:57.726453    5918 round_trippers.go:580]     Audit-Id: c0664c01-ac3d-4211-abad-39f320f58029
	I0425 12:33:57.726589    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d8zc5","generateName":"kube-proxy-","namespace":"kube-system","uid":"feefb48f-5488-4adc-b7e8-47f5d92bd2f8","resourceVersion":"667","creationTimestamp":"2024-04-25T19:25:33Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:25:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6056 chars]
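	(The "Waited for ... due to client-side throttling" lines above and below come from client-go's default token-bucket rate limiter, not from API priority and fairness. A minimal sketch of that behavior, assuming client-go's flowcontrol package and its documented defaults of QPS=5/Burst=10 when a rest.Config leaves them unset:

	```go
	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/util/flowcontrol"
	)

	func main() {
		// Once the burst of 10 is spent, each further request blocks in Accept()
		// for roughly 1/QPS seconds, which is what produces the ~200ms waits
		// logged by request.go in this test.
		limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
		for i := 0; i < 12; i++ {
			start := time.Now()
			limiter.Accept()
			if d := time.Since(start); d > time.Millisecond {
				fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
			}
		}
	}
	```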
	I0425 12:33:57.922987    5918 request.go:629] Waited for 196.039017ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:33:57.923043    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:33:57.923052    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:57.923063    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:57.923071    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:57.926026    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:57.926041    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:57.926048    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:57.926053    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:57.926057    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:58 GMT
	I0425 12:33:57.926061    5918 round_trippers.go:580]     Audit-Id: 3bcf2b98-81ef-4fc6-b4fe-6b77eb45f9aa
	I0425 12:33:57.926064    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:57.926067    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:57.926331    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"a08f7c72-c78c-42d9-aa96-d065a8c730b6","resourceVersion":"882","creationTimestamp":"2024-04-25T19:25:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_25_33_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:25:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3898 chars]
	I0425 12:33:57.926568    5918 pod_ready.go:97] node "multinode-034000-m03" hosting pod "kube-proxy-d8zc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000-m03" has status "Ready":"Unknown"
	I0425 12:33:57.926585    5918 pod_ready.go:81] duration metric: took 379.236246ms for pod "kube-proxy-d8zc5" in "kube-system" namespace to be "Ready" ...
	E0425 12:33:57.926595    5918 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-034000-m03" hosting pod "kube-proxy-d8zc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-034000-m03" has status "Ready":"Unknown"
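	(The skip above happens because the hosting node's Ready condition is "Unknown" after the node was stopped. A small sketch of the underlying condition check, using only the k8s.io/api types; the helper name is illustrative:

	```go
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// nodeReady reports whether a node's Ready condition is "True"; anything
	// else ("False" or "Unknown", as for multinode-034000-m03 here) means pods
	// on that node cannot be expected to become Ready.
	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
			{Type: corev1.NodeReady, Status: corev1.ConditionUnknown},
		}}}
		fmt.Println("ready:", nodeReady(n)) // ready: false -> the pod wait is skipped
	}
	```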
	I0425 12:33:57.926606    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gmspl" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:58.122947    5918 request.go:629] Waited for 196.271875ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmspl
	I0425 12:33:58.122994    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmspl
	I0425 12:33:58.123001    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:58.123010    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:58.123016    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:58.125309    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:58.125329    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:58.125334    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:58.125342    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:58 GMT
	I0425 12:33:58.125345    5918 round_trippers.go:580]     Audit-Id: 301b16b3-c19c-4788-9231-84d1ce81064f
	I0425 12:33:58.125348    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:58.125350    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:58.125352    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:58.125501    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gmspl","generateName":"kube-proxy-","namespace":"kube-system","uid":"b0f6c7c8-ef54-4c63-9de2-05e01ace3e15","resourceVersion":"842","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0425 12:33:58.322948    5918 request.go:629] Waited for 197.181395ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:58.322995    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:58.323001    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:58.323007    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:58.323011    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:58.324418    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:58.324430    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:58.324435    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:58 GMT
	I0425 12:33:58.324438    5918 round_trippers.go:580]     Audit-Id: fe814733-4cb1-4fa4-a807-48001e953c2d
	I0425 12:33:58.324442    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:58.324445    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:58.324453    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:58.324456    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:58.324599    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:58.324793    5918 pod_ready.go:92] pod "kube-proxy-gmspl" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:58.324802    5918 pod_ready.go:81] duration metric: took 398.178326ms for pod "kube-proxy-gmspl" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:58.324809    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mp7qm" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:58.524005    5918 request.go:629] Waited for 199.114634ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mp7qm
	I0425 12:33:58.524154    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mp7qm
	I0425 12:33:58.524167    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:58.524179    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:58.524187    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:58.527546    5918 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:33:58.527560    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:58.527567    5918 round_trippers.go:580]     Audit-Id: 7770c46e-363e-4618-8389-9d26899652c0
	I0425 12:33:58.527572    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:58.527576    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:58.527581    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:58.527585    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:58.527589    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:58 GMT
	I0425 12:33:58.527676    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mp7qm","generateName":"kube-proxy-","namespace":"kube-system","uid":"cc106198-3317-44e2-b1a7-cc5eac6dcadc","resourceVersion":"973","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0425 12:33:58.723305    5918 request.go:629] Waited for 195.255674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:58.723462    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:33:58.723474    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:58.723484    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:58.723492    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:58.726207    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:58.726236    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:58.726267    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:58.726279    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:58.726291    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:58 GMT
	I0425 12:33:58.726305    5918 round_trippers.go:580]     Audit-Id: e8425443-4753-4c99-afca-2405da70f60f
	I0425 12:33:58.726311    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:58.726315    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:58.726496    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"f17dfbe6-22b9-442b-b838-0b8c93835a05","resourceVersion":"985","creationTimestamp":"2024-04-25T19:33:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_33_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:33:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3930 chars]
	I0425 12:33:58.726754    5918 pod_ready.go:92] pod "kube-proxy-mp7qm" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:58.726766    5918 pod_ready.go:81] duration metric: took 401.939368ms for pod "kube-proxy-mp7qm" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:58.726774    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:58.924859    5918 request.go:629] Waited for 198.029152ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-034000
	I0425 12:33:58.924948    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-034000
	I0425 12:33:58.924960    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:58.924973    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:58.924979    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:58.927932    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:58.927947    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:58.927954    5918 round_trippers.go:580]     Audit-Id: 3818ae73-40b0-4086-9b9b-ba72a9c7d128
	I0425 12:33:58.927961    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:58.927964    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:58.927968    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:58.927971    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:58.927974    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:59 GMT
	I0425 12:33:58.928080    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-034000","namespace":"kube-system","uid":"889fb9d4-d8d9-4a92-be22-d0ab1518bc93","resourceVersion":"870","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9a729f77a28640b9fa006c14e6acbd43","kubernetes.io/config.mirror":"9a729f77a28640b9fa006c14e6acbd43","kubernetes.io/config.seen":"2024-04-25T19:24:03.349969029Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0425 12:33:59.123978    5918 request.go:629] Waited for 195.60019ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:59.124021    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:33:59.124044    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:59.124051    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:59.124055    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:59.125585    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:33:59.125595    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:59.125601    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:59.125605    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:59.125609    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:59.125614    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:59 GMT
	I0425 12:33:59.125619    5918 round_trippers.go:580]     Audit-Id: 8eee5175-70a8-4690-bf0a-c2890dc9c931
	I0425 12:33:59.125622    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:59.125861    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:33:59.126060    5918 pod_ready.go:92] pod "kube-scheduler-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:33:59.126069    5918 pod_ready.go:81] duration metric: took 399.277632ms for pod "kube-scheduler-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:33:59.126083    5918 pod_ready.go:38] duration metric: took 1.5998837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
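	(The 6m0s-per-pod waits summarized above follow the usual list-and-poll pattern against the labels in that list. A sketch under stated assumptions, not minikube's actual implementation: it assumes an already-constructed kubernetes.Interface client and uses apimachinery's polling helper:

	```go
	package example

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// podIsReady reports whether the pod's Ready condition is True.
	func podIsReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForLabeledPods blocks until every pod matching selector (e.g.
	// "k8s-app=kube-proxy" or "component=kube-scheduler" from the list above)
	// is Ready, polling for up to the 6m0s budget the log mentions.
	func waitForLabeledPods(cs kubernetes.Interface, namespace, selector string) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				for i := range pods.Items {
					if !podIsReady(&pods.Items[i]) {
						return false, nil
					}
				}
				return len(pods.Items) > 0, nil
			})
	}
	```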
	I0425 12:33:59.126099    5918 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 12:33:59.126139    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:33:59.137274    5918 system_svc.go:56] duration metric: took 11.166248ms WaitForService to wait for kubelet
	I0425 12:33:59.137310    5918 kubeadm.go:576] duration metric: took 5.821727494s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
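	(The kubelet check above relies on the exit status of `systemctl is-active --quiet`, which prints nothing and exits 0 only when the unit is active. A minimal local sketch of the same probe; the log runs it over SSH and with an extra literal "service" token, which is omitted here:

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Run() returns a non-nil error for any nonzero exit status, so the
		// boolean below mirrors "is the kubelet service running".
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}
	```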
	I0425 12:33:59.137331    5918 node_conditions.go:102] verifying NodePressure condition ...
	I0425 12:33:59.323152    5918 request.go:629] Waited for 185.746155ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes
	I0425 12:33:59.323191    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0425 12:33:59.323196    5918 round_trippers.go:469] Request Headers:
	I0425 12:33:59.323205    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:33:59.323210    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:33:59.326181    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:33:59.326202    5918 round_trippers.go:577] Response Headers:
	I0425 12:33:59.326208    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:33:59.326212    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:33:59 GMT
	I0425 12:33:59.326215    5918 round_trippers.go:580]     Audit-Id: 5697eefb-712f-4496-a690-e8e6c4882c44
	I0425 12:33:59.326217    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:33:59.326219    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:33:59.326221    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:33:59.326415    5918 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"988"},"items":[{"metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 15038 chars]
	I0425 12:33:59.326820    5918 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:33:59.326830    5918 node_conditions.go:123] node cpu capacity is 2
	I0425 12:33:59.326839    5918 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:33:59.326843    5918 node_conditions.go:123] node cpu capacity is 2
	I0425 12:33:59.326849    5918 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:33:59.326853    5918 node_conditions.go:123] node cpu capacity is 2
	I0425 12:33:59.326856    5918 node_conditions.go:105] duration metric: took 189.514357ms to run NodePressure ...
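	(The repeated capacity lines above are one pair per node in the NodeList: three nodes, each reporting 17734596Ki ephemeral storage and 2 CPUs. A sketch of extracting those two figures from node status, using only the k8s.io/api types; the function name is illustrative:

	```go
	package example

	import (
		corev1 "k8s.io/api/core/v1"
	)

	// summarizeCapacity mirrors the node_conditions lines: for each node it
	// reads the ephemeral-storage and cpu entries from status.capacity.
	func summarizeCapacity(nodes *corev1.NodeList) []string {
		var out []string
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			out = append(out, n.Name+": ephemeral="+storage.String()+" cpu="+cpu.String())
		}
		return out
	}
	```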
	I0425 12:33:59.326864    5918 start.go:240] waiting for startup goroutines ...
	I0425 12:33:59.326880    5918 start.go:254] writing updated cluster config ...
	I0425 12:33:59.350410    5918 out.go:177] 
	I0425 12:33:59.371656    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:33:59.371781    5918 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:33:59.393672    5918 out.go:177] * Starting "multinode-034000-m03" worker node in "multinode-034000" cluster
	I0425 12:33:59.436579    5918 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 12:33:59.436613    5918 cache.go:56] Caching tarball of preloaded images
	I0425 12:33:59.436820    5918 preload.go:173] Found /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0425 12:33:59.458526    5918 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 12:33:59.458672    5918 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:33:59.459477    5918 start.go:360] acquireMachinesLock for multinode-034000-m03: {Name:mk3030f9170bc25c9124548f80d3e90a8c4abff5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 12:33:59.459600    5918 start.go:364] duration metric: took 100.684µs to acquireMachinesLock for "multinode-034000-m03"
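	(The machines lock above serializes VM start/stop across concurrent minikube operations, with the Delay:500ms/Timeout:13m0s parameters shown in the log. A minimal sketch of that pattern using O_EXCL lock-file semantics, assuming nothing about minikube's actual lock implementation:

	```go
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquire retries creating lockPath exclusively every delay until timeout,
	// matching the Delay/Timeout knobs logged for acquireMachinesLock.
	func acquire(lockPath string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(lockPath) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + lockPath)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to start the machine")
	}
	```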
	I0425 12:33:59.459625    5918 start.go:96] Skipping create...Using existing machine configuration
	I0425 12:33:59.459636    5918 fix.go:54] fixHost starting: m03
	I0425 12:33:59.460044    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:33:59.460070    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:33:59.469588    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53222
	I0425 12:33:59.469935    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:33:59.470271    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:33:59.470384    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:33:59.470627    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:33:59.470735    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:33:59.470813    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:33:59.470899    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:33:59.470984    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5609
	I0425 12:33:59.471919    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid 5609 missing from process table
	I0425 12:33:59.471942    5918 fix.go:112] recreateIfNeeded on multinode-034000-m03: state=Stopped err=<nil>
	I0425 12:33:59.471950    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	W0425 12:33:59.472037    5918 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 12:33:59.493479    5918 out.go:177] * Restarting existing hyperkit VM for "multinode-034000-m03" ...
	I0425 12:33:59.535621    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .Start
	I0425 12:33:59.535911    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:33:59.535961    5918 main.go:141] libmachine: (multinode-034000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/hyperkit.pid
	I0425 12:33:59.537793    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid 5609 missing from process table
	I0425 12:33:59.537816    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | pid 5609 is in state "Stopped"
	I0425 12:33:59.537842    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/hyperkit.pid...
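	(The "pid 5609 missing from process table" check above is the classic stale-pid-file probe: signal 0 tests for a process's existence without delivering anything. A small sketch, with an illustrative pid-file path:

	```go
	package main

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
		"syscall"
	)

	// pidAlive reports whether pid is present in the process table.
	func pidAlive(pid int) bool {
		p, err := os.FindProcess(pid) // on Unix this always succeeds
		if err != nil {
			return false
		}
		return p.Signal(syscall.Signal(0)) == nil // signal 0: existence probe only
	}

	func main() {
		data, err := os.ReadFile("/tmp/hyperkit.pid") // path is illustrative
		if err != nil {
			return
		}
		pid, _ := strconv.Atoi(strings.TrimSpace(string(data)))
		if !pidAlive(pid) {
			fmt.Printf("pid %d missing from process table, removing stale pid file\n", pid)
			os.Remove("/tmp/hyperkit.pid")
		}
	}
	```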
	I0425 12:33:59.538185    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | Using UUID c849e54d-01ec-4b42-86e6-91828949bf04
	I0425 12:33:59.564571    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | Generated MAC aa:be:2a:d5:f9:e
	I0425 12:33:59.564592    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000
	I0425 12:33:59.564718    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c849e54d-01ec-4b42-86e6-91828949bf04", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2ea0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0425 12:33:59.564763    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c849e54d-01ec-4b42-86e6-91828949bf04", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2ea0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0425 12:33:59.564829    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "c849e54d-01ec-4b42-86e6-91828949bf04", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/multinode-034000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/bzimage,/Users/j
enkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"}
	I0425 12:33:59.564878    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U c849e54d-01ec-4b42-86e6-91828949bf04 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/multinode-034000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/tty,log=/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/bzimage,/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/mult
inode-034000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-034000"
	I0425 12:33:59.564903    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0425 12:33:59.566354    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 DEBUG: hyperkit: Pid is 5980
	I0425 12:33:59.566706    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | Attempt 0
	I0425 12:33:59.566719    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:33:59.566800    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5980
	I0425 12:33:59.568556    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | Searching for aa:be:2a:d5:f9:e in /var/db/dhcpd_leases ...
	I0425 12:33:59.568644    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0425 12:33:59.568669    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:46:26:de:d7:8e:2e ID:1,46:26:de:d7:8e:2e Lease:0x662c017b}
	I0425 12:33:59.568696    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:d3:c3:87:d3:c7 ID:1,1e:d3:c3:87:d3:c7 Lease:0x662c0151}
	I0425 12:33:59.568718    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:aa:be:2a:d5:f9:e ID:1,aa:be:2a:d5:f9:e Lease:0x662bffcd}
	I0425 12:33:59.568734    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | Found match: aa:be:2a:d5:f9:e
	I0425 12:33:59.568745    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | IP: 192.169.0.18
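	(The IP discovery above scans macOS's /var/db/dhcpd_leases for the guest's generated MAC. A rough sketch of that lookup with deliberately simplified parsing; it assumes the lease file's usual block layout, where ip_address precedes hw_address within each entry:

	```go
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// ipForMAC returns the ip_address recorded in the lease block whose
	// hw_address ends with mac (entries look like "hw_address=1,aa:be:2a:d5:f9:e").
	func ipForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		s := bufio.NewScanner(f)
		for s.Scan() {
			line := strings.TrimSpace(s.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
				return ip, nil
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "aa:be:2a:d5:f9:e")
		fmt.Println(ip, err)
	}
	```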
	I0425 12:33:59.568785    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetConfigRaw
	I0425 12:33:59.569495    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:33:59.569694    5918 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/config.json ...
	I0425 12:33:59.570174    5918 machine.go:94] provisionDockerMachine start ...
	I0425 12:33:59.570186    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:33:59.570316    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:33:59.570414    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:33:59.570510    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:33:59.570590    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:33:59.570661    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:33:59.570800    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:33:59.570954    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:33:59.570961    5918 main.go:141] libmachine: About to run SSH command:
	hostname
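	(The "native" SSH client above dials the VM directly with the machine's key. A minimal sketch, assuming golang.org/x/crypto/ssh and a simplified key path; host-key checking is disabled only because the target is a local VM:

	```go
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/multinode-034000-m03/id_rsa"))
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local VM only
		}
		client, err := ssh.Dial("tcp", "192.169.0.18:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("SSH cmd err, output: %v: %s", err, out)
	}
	```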
	I0425 12:33:59.574310    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0425 12:33:59.582667    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0425 12:33:59.583578    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:33:59.583599    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:33:59.583607    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:33:59.583619    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:33:59.961549    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0425 12:33:59.961563    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:33:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0425 12:34:00.076249    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:34:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0425 12:34:00.076278    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:34:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0425 12:34:00.076292    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:34:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0425 12:34:00.076304    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:34:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0425 12:34:00.077141    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:34:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0425 12:34:00.077150    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:34:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0425 12:34:05.333138    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:34:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0425 12:34:05.333204    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:34:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0425 12:34:05.333214    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:34:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0425 12:34:05.357345    5918 main.go:141] libmachine: (multinode-034000-m03) DBG | 2024/04/25 12:34:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0425 12:34:10.630227    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 12:34:10.630242    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetMachineName
	I0425 12:34:10.630391    5918 buildroot.go:166] provisioning hostname "multinode-034000-m03"
	I0425 12:34:10.630405    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetMachineName
	I0425 12:34:10.630504    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:10.630589    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:10.630676    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:10.630757    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:10.630847    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:10.630996    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:34:10.631151    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:34:10.631160    5918 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-034000-m03 && echo "multinode-034000-m03" | sudo tee /etc/hostname
	I0425 12:34:10.694288    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-034000-m03
	
	I0425 12:34:10.694303    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:10.694437    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:10.694546    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:10.694646    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:10.694757    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:10.694895    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:34:10.695035    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:34:10.695046    5918 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-034000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-034000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-034000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 12:34:10.753554    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 12:34:10.753577    5918 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18757-1425/.minikube CaCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18757-1425/.minikube}
	I0425 12:34:10.753589    5918 buildroot.go:174] setting up certificates
	I0425 12:34:10.753598    5918 provision.go:84] configureAuth start
	I0425 12:34:10.753606    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetMachineName
	I0425 12:34:10.753742    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:34:10.753845    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:10.753924    5918 provision.go:143] copyHostCerts
	I0425 12:34:10.753952    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:34:10.753999    5918 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem, removing ...
	I0425 12:34:10.754005    5918 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem
	I0425 12:34:10.754150    5918 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.pem (1078 bytes)
	I0425 12:34:10.754381    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:34:10.754411    5918 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem, removing ...
	I0425 12:34:10.754416    5918 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem
	I0425 12:34:10.754518    5918 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/cert.pem (1123 bytes)
	I0425 12:34:10.754689    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:34:10.754721    5918 exec_runner.go:144] found /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem, removing ...
	I0425 12:34:10.754727    5918 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem
	I0425 12:34:10.754795    5918 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18757-1425/.minikube/key.pem (1675 bytes)
	I0425 12:34:10.754951    5918 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem org=jenkins.multinode-034000-m03 san=[127.0.0.1 192.169.0.18 localhost minikube multinode-034000-m03]
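	(The server cert above is signed by the machine CA and carries the SANs listed in the log: 127.0.0.1, 192.169.0.18, localhost, minikube, multinode-034000-m03. A compact sketch of that step with crypto/x509, simplified fields; it is not minikube's actual provisioning code:

	```go
	package example

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert creates a CA-signed server certificate whose SANs match
	// the san=[...] list in the provisioning log.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (der []byte, key *rsa.PrivateKey, err error) {
		key, err = rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-034000-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.18")},
			DNSNames:     []string{"localhost", "minikube", "multinode-034000-m03"},
		}
		der, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}
	```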
	I0425 12:34:10.976614    5918 provision.go:177] copyRemoteCerts
	I0425 12:34:10.976662    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 12:34:10.976676    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:10.976825    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:10.976926    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:10.977020    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:10.977117    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:34:11.009546    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 12:34:11.009623    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0425 12:34:11.029600    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 12:34:11.029682    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0425 12:34:11.053782    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 12:34:11.053858    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 12:34:11.078722    5918 provision.go:87] duration metric: took 325.105982ms to configureAuth
	I0425 12:34:11.078736    5918 buildroot.go:189] setting minikube options for container-runtime
	I0425 12:34:11.078916    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:34:11.078929    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:34:11.079081    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:11.079190    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:11.079301    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:11.079399    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:11.079487    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:11.079601    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:34:11.079729    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:34:11.079737    5918 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0425 12:34:11.130860    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0425 12:34:11.130872    5918 buildroot.go:70] root file system type: tmpfs
	I0425 12:34:11.130944    5918 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0425 12:34:11.130955    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:11.131087    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:11.131175    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:11.131284    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:11.131400    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:11.131522    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:34:11.131660    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:34:11.131706    5918 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.16"
	Environment="NO_PROXY=192.169.0.16,192.169.0.17"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
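	(The `%!s(MISSING)` in the command above is Go's fmt marker for a %s verb with no matching argument; it most plausibly appears because the command template's literal `printf %s` is re-interpreted when the command string is itself passed through a printf-style logger. The VM still receives the full unit text, as the successful output below shows. A tiny reproduction of the marker:

	```go
	package main

	import "fmt"

	func main() {
		// A %s with no corresponding argument is rendered as %!s(MISSING),
		// which is exactly the token that shows up in the logged command.
		fmt.Println(fmt.Sprintf("sudo mkdir -p /lib/systemd/system && printf %s"))
	}
	```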
	I0425 12:34:11.195262    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.16
	Environment=NO_PROXY=192.169.0.16,192.169.0.17
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
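The unit file echoed back above is the same drop-in that was just written via sudo tee: the bare "ExecStart=" line clears the command inherited from the base docker.service, so only the dockerd invocation that follows it applies, and the stacked Environment=NO_PROXY lines show the last assignment winning. As a minimal sketch (not minikube's actual provisioner code; the DockerdFlags/NoProxy names are illustrative), such an override can be rendered from a Go text/template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // unitTmpl is a trimmed-down override: the bare "ExecStart=" resets the
    // command inherited from the base unit, then the next line sets ours.
    const unitTmpl = `[Service]
    {{range .NoProxy}}Environment=NO_PROXY={{.}}
    {{end}}ExecStart=
    ExecStart=/usr/bin/dockerd {{.DockerdFlags}}
    `

    func main() {
    	t := template.Must(template.New("docker-override").Parse(unitTmpl))
    	data := struct {
    		NoProxy      []string
    		DockerdFlags string
    	}{
    		// Values mirror the log above; flags abbreviated for the sketch.
    		NoProxy:      []string{"192.169.0.16", "192.169.0.16,192.169.0.17"},
    		DockerdFlags: "-H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576",
    	}
    	if err := t.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }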
	I0425 12:34:11.195280    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:11.195428    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:11.195520    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:11.195620    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:11.195710    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:11.195839    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:34:11.195992    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:34:11.196004    5918 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0425 12:34:12.730068    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
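The one-liner above hinges on diff's exit status: 0 means the rendered unit is unchanged and nothing happens; non-zero (including the "can't stat" case seen here, where no unit existed yet) takes the mv/daemon-reload/restart branch. A rough Go equivalent of that change-detection pattern (paths and service name are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // installIfChanged mimics `diff -u old new || { mv new old; restart; }`:
    // it only acts when diff exits non-zero, i.e. the files differ or the
    // old file is missing.
    func installIfChanged(oldPath, newPath string) error {
    	if err := exec.Command("diff", "-u", oldPath, newPath).Run(); err == nil {
    		return nil // identical: leave the running service alone
    	}
    	for _, args := range [][]string{
    		{"mv", newPath, oldPath},
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := installIfChanged("/tmp/docker.service", "/tmp/docker.service.new"); err != nil {
    		fmt.Println(err)
    	}
    }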
	I0425 12:34:12.730084    5918 machine.go:97] duration metric: took 13.159506523s to provisionDockerMachine
	I0425 12:34:12.730093    5918 start.go:293] postStartSetup for "multinode-034000-m03" (driver="hyperkit")
	I0425 12:34:12.730100    5918 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 12:34:12.730110    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:34:12.730304    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 12:34:12.730320    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:12.730415    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:12.730498    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:12.730593    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:12.730674    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:34:12.765059    5918 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 12:34:12.767893    5918 command_runner.go:130] > NAME=Buildroot
	I0425 12:34:12.767908    5918 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0425 12:34:12.767915    5918 command_runner.go:130] > ID=buildroot
	I0425 12:34:12.767922    5918 command_runner.go:130] > VERSION_ID=2023.02.9
	I0425 12:34:12.767929    5918 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0425 12:34:12.768062    5918 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 12:34:12.768073    5918 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/addons for local assets ...
	I0425 12:34:12.768158    5918 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18757-1425/.minikube/files for local assets ...
	I0425 12:34:12.768305    5918 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> 18852.pem in /etc/ssl/certs
	I0425 12:34:12.768312    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /etc/ssl/certs/18852.pem
	I0425 12:34:12.768470    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 12:34:12.776564    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /etc/ssl/certs/18852.pem (1708 bytes)
	I0425 12:34:12.796026    5918 start.go:296] duration metric: took 65.914221ms for postStartSetup
	I0425 12:34:12.796045    5918 fix.go:56] duration metric: took 13.336013848s for fixHost
	I0425 12:34:12.796059    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:12.796192    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:12.796287    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:12.796380    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:12.796459    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:12.796571    5918 main.go:141] libmachine: Using SSH client type: native
	I0425 12:34:12.796709    5918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc7a2b80] 0xc7a58e0 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0425 12:34:12.796717    5918 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 12:34:12.849540    5918 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714073652.951560611
	
	I0425 12:34:12.849552    5918 fix.go:216] guest clock: 1714073652.951560611
	I0425 12:34:12.849557    5918 fix.go:229] Guest: 2024-04-25 12:34:12.951560611 -0700 PDT Remote: 2024-04-25 12:34:12.79605 -0700 PDT m=+108.402535917 (delta=155.510611ms)
	I0425 12:34:12.849573    5918 fix.go:200] guest clock delta is within tolerance: 155.510611ms
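The date round trip above is the guest-clock check: the seconds.nanoseconds string from the VM is parsed and compared with the host clock, and a small skew (155ms here) is tolerated. A self-contained sketch of that comparison; the one-second tolerance below is an assumption for the example, not minikube's actual threshold:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1714073652.951560611" into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1714073652.951560611")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < time.Second)
    }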
	I0425 12:34:12.849578    5918 start.go:83] releasing machines lock for "multinode-034000-m03", held for 13.389565152s
	I0425 12:34:12.849594    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:34:12.849725    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:34:12.873256    5918 out.go:177] * Found network options:
	I0425 12:34:12.895334    5918 out.go:177]   - NO_PROXY=192.169.0.16,192.169.0.17
	W0425 12:34:12.916131    5918 proxy.go:119] fail to check proxy env: Error ip not in block
	W0425 12:34:12.916175    5918 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 12:34:12.916210    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:34:12.917022    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:34:12.917285    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:34:12.917375    5918 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 12:34:12.917428    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	W0425 12:34:12.917526    5918 proxy.go:119] fail to check proxy env: Error ip not in block
	W0425 12:34:12.917550    5918 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 12:34:12.917645    5918 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0425 12:34:12.917663    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:12.917684    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:12.917859    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:12.917928    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:12.918072    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:12.918116    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:12.918317    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:34:12.918333    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:12.918483    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:34:12.948574    5918 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0425 12:34:12.948644    5918 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 12:34:12.948696    5918 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 12:34:12.995531    5918 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0425 12:34:12.995702    5918 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0425 12:34:12.995731    5918 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
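The find command above neutralizes competing CNI configs (here 87-podman-bridge.conflist) by renaming them with a .mk_disabled suffix rather than deleting them, so they can be restored later. A minimal sketch of the same rename-to-disable pattern:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // disableConflicting renames any bridge/podman CNI config under dir to
    // <name>.mk_disabled so only the CNI the tool manages stays active.
    func disableConflicting(dir string) ([]string, error) {
    	var disabled []string
    	for _, pat := range []string{"*bridge*", "*podman*"} {
    		matches, err := filepath.Glob(filepath.Join(dir, pat))
    		if err != nil {
    			return nil, err
    		}
    		for _, m := range matches {
    			if filepath.Ext(m) == ".mk_disabled" {
    				continue // already disabled
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				return nil, err
    			}
    			disabled = append(disabled, m)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableConflicting("/etc/cni/net.d")
    	fmt.Println(disabled, err)
    }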
	I0425 12:34:12.995742    5918 start.go:494] detecting cgroup driver to use...
	I0425 12:34:12.995816    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:34:13.011622    5918 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0425 12:34:13.011846    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0425 12:34:13.020819    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0425 12:34:13.029064    5918 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0425 12:34:13.029117    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0425 12:34:13.037580    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:34:13.045964    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0425 12:34:13.054593    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0425 12:34:13.063080    5918 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 12:34:13.075828    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0425 12:34:13.089184    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0425 12:34:13.098577    5918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
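The sed pipeline above rewrites /etc/containerd/config.toml in place: SystemdCgroup = false selects the cgroupfs driver, the runtime is pinned to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. A sketch of the SystemdCgroup substitution using Go's regexp instead of sed:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setSystemdCgroup flips the SystemdCgroup key while preserving the
    // original indentation, mirroring sed 's|^( *)SystemdCgroup = .*$|...|'.
    func setSystemdCgroup(config string, enabled bool) string {
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
    }

    func main() {
    	in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
    	fmt.Print(setSystemdCgroup(in, false))
    }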
	I0425 12:34:13.112432    5918 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 12:34:13.126178    5918 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0425 12:34:13.126435    5918 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 12:34:13.139323    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:34:13.231825    5918 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0425 12:34:13.251715    5918 start.go:494] detecting cgroup driver to use...
	I0425 12:34:13.251783    5918 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0425 12:34:13.263511    5918 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0425 12:34:13.264021    5918 command_runner.go:130] > [Unit]
	I0425 12:34:13.264031    5918 command_runner.go:130] > Description=Docker Application Container Engine
	I0425 12:34:13.264035    5918 command_runner.go:130] > Documentation=https://docs.docker.com
	I0425 12:34:13.264040    5918 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0425 12:34:13.264044    5918 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0425 12:34:13.264048    5918 command_runner.go:130] > StartLimitBurst=3
	I0425 12:34:13.264052    5918 command_runner.go:130] > StartLimitIntervalSec=60
	I0425 12:34:13.264056    5918 command_runner.go:130] > [Service]
	I0425 12:34:13.264058    5918 command_runner.go:130] > Type=notify
	I0425 12:34:13.264062    5918 command_runner.go:130] > Restart=on-failure
	I0425 12:34:13.264065    5918 command_runner.go:130] > Environment=NO_PROXY=192.169.0.16
	I0425 12:34:13.264070    5918 command_runner.go:130] > Environment=NO_PROXY=192.169.0.16,192.169.0.17
	I0425 12:34:13.264076    5918 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0425 12:34:13.264085    5918 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0425 12:34:13.264091    5918 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0425 12:34:13.264096    5918 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0425 12:34:13.264102    5918 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0425 12:34:13.264108    5918 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0425 12:34:13.264117    5918 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0425 12:34:13.264123    5918 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0425 12:34:13.264129    5918 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0425 12:34:13.264132    5918 command_runner.go:130] > ExecStart=
	I0425 12:34:13.264147    5918 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0425 12:34:13.264151    5918 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0425 12:34:13.264165    5918 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0425 12:34:13.264174    5918 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0425 12:34:13.264179    5918 command_runner.go:130] > LimitNOFILE=infinity
	I0425 12:34:13.264185    5918 command_runner.go:130] > LimitNPROC=infinity
	I0425 12:34:13.264189    5918 command_runner.go:130] > LimitCORE=infinity
	I0425 12:34:13.264194    5918 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0425 12:34:13.264199    5918 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0425 12:34:13.264203    5918 command_runner.go:130] > TasksMax=infinity
	I0425 12:34:13.264207    5918 command_runner.go:130] > TimeoutStartSec=0
	I0425 12:34:13.264213    5918 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0425 12:34:13.264218    5918 command_runner.go:130] > Delegate=yes
	I0425 12:34:13.264226    5918 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0425 12:34:13.264230    5918 command_runner.go:130] > KillMode=process
	I0425 12:34:13.264233    5918 command_runner.go:130] > [Install]
	I0425 12:34:13.264236    5918 command_runner.go:130] > WantedBy=multi-user.target
	I0425 12:34:13.264352    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:34:13.276457    5918 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 12:34:13.290361    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 12:34:13.301380    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:34:13.312354    5918 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0425 12:34:13.340283    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0425 12:34:13.350755    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 12:34:13.365530    5918 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0425 12:34:13.365782    5918 ssh_runner.go:195] Run: which cri-dockerd
	I0425 12:34:13.368547    5918 command_runner.go:130] > /usr/bin/cri-dockerd
	I0425 12:34:13.368726    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0425 12:34:13.375837    5918 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0425 12:34:13.389735    5918 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0425 12:34:13.484012    5918 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0425 12:34:13.589281    5918 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0425 12:34:13.589304    5918 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
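The "configuring docker to use cgroupfs" step above writes a small /etc/docker/daemon.json (130 bytes), whose body the log does not show. The documented Docker knob for the cgroup driver is exec-opts with native.cgroupdriver, so a plausible reconstruction, purely an assumption, looks like this sketch:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Assumed shape only -- the log records just the byte count, not
    	// the actual contents of daemon.json.
    	daemonJSON := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	b, err := json.MarshalIndent(daemonJSON, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
    }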
	I0425 12:34:13.604761    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:34:13.698681    5918 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0425 12:34:15.954970    5918 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.256203479s)
	I0425 12:34:15.955038    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0425 12:34:15.966053    5918 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0425 12:34:15.980628    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0425 12:34:15.990894    5918 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0425 12:34:16.082930    5918 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0425 12:34:16.183401    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:34:16.289015    5918 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0425 12:34:16.303009    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0425 12:34:16.314496    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:34:16.420877    5918 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0425 12:34:16.486066    5918 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0425 12:34:16.486141    5918 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0425 12:34:16.490663    5918 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0425 12:34:16.490676    5918 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0425 12:34:16.490684    5918 command_runner.go:130] > Device: 0,22	Inode: 743         Links: 1
	I0425 12:34:16.490692    5918 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0425 12:34:16.490706    5918 command_runner.go:130] > Access: 2024-04-25 19:34:16.536157574 +0000
	I0425 12:34:16.490718    5918 command_runner.go:130] > Modify: 2024-04-25 19:34:16.536157574 +0000
	I0425 12:34:16.490725    5918 command_runner.go:130] > Change: 2024-04-25 19:34:16.539157092 +0000
	I0425 12:34:16.490730    5918 command_runner.go:130] >  Birth: -
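"Will wait 60s for socket path" above boils down to stat-ing the socket until it appears or the deadline passes. A generic polling sketch of that wait (interval chosen arbitrarily for the example):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until path exists or the timeout elapses, roughly
    // what "Will wait 60s for socket path /var/run/cri-dockerd.sock" means.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
    }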
	I0425 12:34:16.490809    5918 start.go:562] Will wait 60s for crictl version
	I0425 12:34:16.490855    5918 ssh_runner.go:195] Run: which crictl
	I0425 12:34:16.493921    5918 command_runner.go:130] > /usr/bin/crictl
	I0425 12:34:16.494184    5918 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 12:34:16.519415    5918 command_runner.go:130] > Version:  0.1.0
	I0425 12:34:16.519429    5918 command_runner.go:130] > RuntimeName:  docker
	I0425 12:34:16.519433    5918 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0425 12:34:16.519437    5918 command_runner.go:130] > RuntimeApiVersion:  v1
	I0425 12:34:16.520388    5918 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0425 12:34:16.520454    5918 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0425 12:34:16.536260    5918 command_runner.go:130] > 26.0.2
	I0425 12:34:16.537048    5918 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0425 12:34:16.552360    5918 command_runner.go:130] > 26.0.2
	I0425 12:34:16.577029    5918 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0425 12:34:16.618008    5918 out.go:177]   - env NO_PROXY=192.169.0.16
	I0425 12:34:16.639313    5918 out.go:177]   - env NO_PROXY=192.169.0.16,192.169.0.17
	I0425 12:34:16.660050    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetIP
	I0425 12:34:16.660301    5918 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0425 12:34:16.663870    5918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 12:34:16.674212    5918 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:34:16.674377    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:34:16.674591    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:16.674606    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:16.683324    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53243
	I0425 12:34:16.683671    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:16.683988    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:34:16.683999    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:16.684199    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:16.684305    5918 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:34:16.684391    5918 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:34:16.684466    5918 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5931
	I0425 12:34:16.685424    5918 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:34:16.685657    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:16.685672    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:16.694410    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53245
	I0425 12:34:16.694763    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:16.695091    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:34:16.695107    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:16.695302    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:16.695437    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:34:16.695542    5918 certs.go:68] Setting up /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000 for IP: 192.169.0.18
	I0425 12:34:16.695548    5918 certs.go:194] generating shared ca certs ...
	I0425 12:34:16.695560    5918 certs.go:226] acquiring lock for ca certs: {Name:mk1f3cabc8bfb1fa57eb09572b98c6852173235a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 12:34:16.695709    5918 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key
	I0425 12:34:16.695761    5918 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key
	I0425 12:34:16.695770    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 12:34:16.695794    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 12:34:16.695812    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 12:34:16.695829    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 12:34:16.695913    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem (1338 bytes)
	W0425 12:34:16.695954    5918 certs.go:480] ignoring /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885_empty.pem, impossibly tiny 0 bytes
	I0425 12:34:16.695970    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 12:34:16.696004    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/ca.pem (1078 bytes)
	I0425 12:34:16.696036    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/cert.pem (1123 bytes)
	I0425 12:34:16.696076    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/key.pem (1675 bytes)
	I0425 12:34:16.696145    5918 certs.go:484] found cert: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem (1708 bytes)
	I0425 12:34:16.696180    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem -> /usr/share/ca-certificates/1885.pem
	I0425 12:34:16.696207    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem -> /usr/share/ca-certificates/18852.pem
	I0425 12:34:16.696226    5918 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:34:16.696249    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 12:34:16.716114    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 12:34:16.735585    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 12:34:16.754766    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 12:34:16.774385    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/certs/1885.pem --> /usr/share/ca-certificates/1885.pem (1338 bytes)
	I0425 12:34:16.793693    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/ssl/certs/18852.pem --> /usr/share/ca-certificates/18852.pem (1708 bytes)
	I0425 12:34:16.813502    5918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 12:34:16.832980    5918 ssh_runner.go:195] Run: openssl version
	I0425 12:34:16.837109    5918 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0425 12:34:16.837346    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18852.pem && ln -fs /usr/share/ca-certificates/18852.pem /etc/ssl/certs/18852.pem"
	I0425 12:34:16.846469    5918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18852.pem
	I0425 12:34:16.849823    5918 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 25 18:39 /usr/share/ca-certificates/18852.pem
	I0425 12:34:16.849933    5918 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:39 /usr/share/ca-certificates/18852.pem
	I0425 12:34:16.849963    5918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18852.pem
	I0425 12:34:16.854025    5918 command_runner.go:130] > 3ec20f2e
	I0425 12:34:16.854205    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18852.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 12:34:16.863318    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 12:34:16.872383    5918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:34:16.875753    5918 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 25 18:31 /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:34:16.875867    5918 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:31 /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:34:16.875906    5918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 12:34:16.879983    5918 command_runner.go:130] > b5213941
	I0425 12:34:16.880131    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 12:34:16.889278    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1885.pem && ln -fs /usr/share/ca-certificates/1885.pem /etc/ssl/certs/1885.pem"
	I0425 12:34:16.898460    5918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1885.pem
	I0425 12:34:16.901760    5918 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 25 18:39 /usr/share/ca-certificates/1885.pem
	I0425 12:34:16.901882    5918 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:39 /usr/share/ca-certificates/1885.pem
	I0425 12:34:16.901915    5918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1885.pem
	I0425 12:34:16.906181    5918 command_runner.go:130] > 51391683
	I0425 12:34:16.906364    5918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1885.pem /etc/ssl/certs/51391683.0"
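The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look certificates up in /etc/ssl/certs, and each cert then gets a <hash>.0 symlink (3ec20f2e.0, b5213941.0, 51391683.0 here). A sketch of that hash-and-link step, assuming an openssl binary on PATH:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash creates the /etc/ssl/certs/<subject-hash>.0 symlink
    // that lets OpenSSL find certPath during verification.
    func linkBySubjectHash(certPath, certsDir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	// ln -fs semantics: drop any stale link before creating the new one.
    	_ = os.Remove(link)
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }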
	I0425 12:34:16.915348    5918 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 12:34:16.918495    5918 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 12:34:16.918510    5918 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 12:34:16.918541    5918 kubeadm.go:928] updating node {m03 192.169.0.18 0 v1.30.0 docker false true} ...
	I0425 12:34:16.918601    5918 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-034000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 12:34:16.918640    5918 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 12:34:16.926651    5918 command_runner.go:130] > kubeadm
	I0425 12:34:16.926661    5918 command_runner.go:130] > kubectl
	I0425 12:34:16.926665    5918 command_runner.go:130] > kubelet
	I0425 12:34:16.926782    5918 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 12:34:16.926833    5918 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0425 12:34:16.934769    5918 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0425 12:34:16.948273    5918 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 12:34:16.961858    5918 ssh_runner.go:195] Run: grep 192.169.0.16	control-plane.minikube.internal$ /etc/hosts
	I0425 12:34:16.964775    5918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
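Both /etc/hosts edits in this section (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent pattern: grep -v strips any stale entry, the fresh one is appended, and the result is copied back over /etc/hosts via a temp file. The same logic expressed in Go:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost drops any line ending in "\t<name>" and appends "ip\tname",
    // mirroring the grep -v / echo / cp one-liner from the log.
    func upsertHost(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if line == "" || strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	b, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(upsertHost(string(b), "192.169.0.16", "control-plane.minikube.internal"))
    }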
	I0425 12:34:16.974842    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:34:17.072636    5918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 12:34:17.086745    5918 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:34:17.087016    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:17.087041    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:17.095933    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53247
	I0425 12:34:17.096371    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:17.096697    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:34:17.096709    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:17.096954    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:17.097068    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:34:17.097148    5918 start.go:316] joinCluster: &{Name:multinode-034000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.0 ClusterName:multinode-034000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.18 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 12:34:17.097236    5918 start.go:329] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.169.0.18 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0425 12:34:17.097258    5918 host.go:66] Checking if "multinode-034000-m03" exists ...
	I0425 12:34:17.097518    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:17.097533    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:17.106435    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53249
	I0425 12:34:17.106785    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:17.107120    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:34:17.107131    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:17.107358    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:17.107484    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .DriverName
	I0425 12:34:17.107576    5918 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:34:17.107756    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:34:17.107983    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:17.108014    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:17.116698    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53251
	I0425 12:34:17.117070    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:17.117435    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:34:17.117461    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:17.117668    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:17.117783    5918 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:34:17.117864    5918 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:34:17.117960    5918 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5931
	I0425 12:34:17.119018    5918 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:34:17.119286    5918 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:34:17.119314    5918 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:34:17.128517    5918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53253
	I0425 12:34:17.128886    5918 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:34:17.129240    5918 main.go:141] libmachine: Using API Version  1
	I0425 12:34:17.129251    5918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:34:17.129494    5918 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:34:17.129627    5918 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:34:17.129729    5918 api_server.go:166] Checking apiserver status ...
	I0425 12:34:17.129803    5918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:34:17.129815    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:34:17.129919    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:34:17.130007    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:34:17.130112    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:34:17.130204    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:34:17.165370    5918 command_runner.go:130] > 1675
	I0425 12:34:17.165521    5918 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1675/cgroup
	W0425 12:34:17.173634    5918 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1675/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:34:17.173687    5918 ssh_runner.go:195] Run: ls
	I0425 12:34:17.178385    5918 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:34:17.182029    5918 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
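The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 with body "ok". A minimal client-side sketch; skipping certificate verification here is a shortcut for the example only, not what minikube does (it trusts the cluster's own CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: a real client should load the cluster CA
    		// instead of disabling verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.169.0.16:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }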
	I0425 12:34:17.182091    5918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-034000-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0425 12:34:17.272641    5918 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-spsv9, kube-system/kube-proxy-d8zc5
	I0425 12:34:17.274152    5918 command_runner.go:130] > node/multinode-034000-m03 cordoned
	I0425 12:34:17.274162    5918 command_runner.go:130] > node/multinode-034000-m03 drained
	I0425 12:34:17.274250    5918 node.go:128] successfully drained node "multinode-034000-m03"
	I0425 12:34:17.274270    5918 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0425 12:34:17.274288    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHHostname
	I0425 12:34:17.274430    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHPort
	I0425 12:34:17.274533    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHKeyPath
	I0425 12:34:17.274622    5918 main.go:141] libmachine: (multinode-034000-m03) Calling .GetSSHUsername
	I0425 12:34:17.274704    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m03/id_rsa Username:docker}
	I0425 12:34:17.364633    5918 command_runner.go:130] > [preflight] Running pre-flight checks
	I0425 12:34:17.364797    5918 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0425 12:34:17.364827    5918 command_runner.go:130] > [reset] Stopping the kubelet service
	I0425 12:34:17.371606    5918 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0425 12:34:17.494764    5918 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0425 12:34:17.498364    5918 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0425 12:34:17.498410    5918 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0425 12:34:17.498422    5918 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0425 12:34:17.498438    5918 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0425 12:34:17.498444    5918 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0425 12:34:17.498449    5918 command_runner.go:130] > to reset your system's IPVS tables.
	I0425 12:34:17.498454    5918 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0425 12:34:17.498464    5918 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0425 12:34:17.499263    5918 command_runner.go:130] ! W0425 19:34:17.469340    1155 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0425 12:34:17.499279    5918 node.go:155] successfully reset node "multinode-034000-m03"
	I0425 12:34:17.499540    5918 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:34:17.499736    5918 kapi.go:59] client config for multinode-034000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key", CAFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdc47ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0425 12:34:17.499934    5918 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0425 12:34:17.499963    5918 round_trippers.go:463] DELETE https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:17.499967    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:17.499972    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:17.499977    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:17.499979    5918 round_trippers.go:473]     Content-Type: application/json
	I0425 12:34:17.502893    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:17.502906    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:17.502911    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:17.502918    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:17.502922    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:17.502925    5918 round_trippers.go:580]     Content-Length: 171
	I0425 12:34:17.502928    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:17 GMT
	I0425 12:34:17.502932    5918 round_trippers.go:580]     Audit-Id: 6676fafa-6ac0-4647-966c-70547f5700ad
	I0425 12:34:17.502934    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:17.502945    5918 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-034000-m03","kind":"nodes","uid":"a08f7c72-c78c-42d9-aa96-d065a8c730b6"}}
	I0425 12:34:17.502964    5918 node.go:180] successfully deleted node "multinode-034000-m03"
	I0425 12:34:17.502971    5918 start.go:333] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.169.0.18 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0425 12:34:17.502988    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0425 12:34:17.503002    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:34:17.503156    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:34:17.503257    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:34:17.503346    5918 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:34:17.503461    5918 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:34:17.581705    5918 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2ur7jc.jxlo072yq1uro1w4 --discovery-token-ca-cert-hash sha256:00651354ee141ab473da454fccfa896339ebbff71705c055a7dbbfb8ae906871 
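The join command printed above carries two credentials: a bootstrap token and --discovery-token-ca-cert-hash, which is the hex SHA-256 of the cluster CA certificate's Subject Public Key Info. The joining node uses that hash to pin the CA it fetches over the otherwise-unauthenticated bootstrap channel. Computing the same hash from ca.crt:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm's hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }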
	I0425 12:34:17.584731    5918 start.go:342] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.169.0.18 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0425 12:34:17.584755    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2ur7jc.jxlo072yq1uro1w4 --discovery-token-ca-cert-hash sha256:00651354ee141ab473da454fccfa896339ebbff71705c055a7dbbfb8ae906871 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-034000-m03"
	I0425 12:34:17.614237    5918 command_runner.go:130] > [preflight] Running pre-flight checks
	I0425 12:34:17.710316    5918 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0425 12:34:17.710342    5918 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0425 12:34:17.743307    5918 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 12:34:17.743323    5918 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 12:34:17.743329    5918 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0425 12:34:17.849883    5918 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 12:34:18.359562    5918 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 509.993457ms
	I0425 12:34:18.359580    5918 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0425 12:34:18.369697    5918 command_runner.go:130] > This node has joined the cluster:
	I0425 12:34:18.369713    5918 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0425 12:34:18.369720    5918 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0425 12:34:18.369725    5918 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0425 12:34:18.371404    5918 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 12:34:18.371512    5918 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0425 12:34:18.580079    5918 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0425 12:34:18.580145    5918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-034000-m03 minikube.k8s.io/updated_at=2024_04_25T12_34_18_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=multinode-034000 minikube.k8s.io/primary=false
	I0425 12:34:18.648518    5918 command_runner.go:130] > node/multinode-034000-m03 labeled
	I0425 12:34:18.648543    5918 start.go:318] duration metric: took 1.551348971s to joinCluster
	I0425 12:34:18.648585    5918 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.169.0.18 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0425 12:34:18.671982    5918 out.go:177] * Verifying Kubernetes components...
	I0425 12:34:18.648767    5918 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:34:18.729828    5918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 12:34:18.834726    5918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 12:34:18.847125    5918 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 12:34:18.847322    5918 kapi.go:59] client config for multinode-034000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/multinode-034000/client.key", CAFile:"/Users/jenkins/minikube-integration/18757-1425/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdc47ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0425 12:34:18.847501    5918 node_ready.go:35] waiting up to 6m0s for node "multinode-034000-m03" to be "Ready" ...
	I0425 12:34:18.847548    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:18.847553    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:18.847558    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:18.847562    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:18.849135    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:18.849145    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:18.849150    5918 round_trippers.go:580]     Audit-Id: 7948cfce-e0b9-4850-9714-2ad777cf2cbb
	I0425 12:34:18.849154    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:18.849157    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:18.849160    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:18.849164    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:18.849166    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:19 GMT
	I0425 12:34:18.849367    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1044","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3395 chars]
	I0425 12:34:19.348425    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:19.348445    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:19.348456    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:19.348463    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:19.351015    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:19.351028    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:19.351035    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:19.351039    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:19.351042    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:19.351045    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:19.351047    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:19 GMT
	I0425 12:34:19.351051    5918 round_trippers.go:580]     Audit-Id: bd89904c-4f71-48f8-83d0-2829cf14d509
	I0425 12:34:19.351536    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1044","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3395 chars]
	I0425 12:34:19.847982    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:19.848003    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:19.848015    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:19.848028    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:19.850032    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:19.850042    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:19.850049    5918 round_trippers.go:580]     Audit-Id: c77211b3-b70e-4f16-a2c4-cd7497ffec3a
	I0425 12:34:19.850053    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:19.850056    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:19.850059    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:19.850063    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:19.850067    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:20 GMT
	I0425 12:34:19.850173    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1044","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3395 chars]
	I0425 12:34:20.348396    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:20.348454    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:20.348468    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:20.348476    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:20.350366    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:20.350380    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:20.350388    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:20 GMT
	I0425 12:34:20.350392    5918 round_trippers.go:580]     Audit-Id: 77c26e80-eb6d-4ad4-8553-7dacaf8cd8f7
	I0425 12:34:20.350395    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:20.350398    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:20.350402    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:20.350404    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:20.350578    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1044","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3395 chars]
	I0425 12:34:20.847722    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:20.847740    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:20.847747    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:20.847750    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:20.850992    5918 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 12:34:20.851009    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:20.851016    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:20.851020    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:20.851026    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:20.851029    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:20.851032    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:21 GMT
	I0425 12:34:20.851037    5918 round_trippers.go:580]     Audit-Id: ce0d5334-6895-4261-83f4-6add7a2d2b4c
	I0425 12:34:20.851159    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1044","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3395 chars]
	I0425 12:34:20.851328    5918 node_ready.go:53] node "multinode-034000-m03" has status "Ready":"False"
	I0425 12:34:21.347952    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:21.347965    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:21.347971    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:21.347975    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:21.349392    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:21.349404    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:21.349412    5918 round_trippers.go:580]     Audit-Id: cea9e15d-4f3c-4ddd-85b3-a8d02f096997
	I0425 12:34:21.349418    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:21.349423    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:21.349440    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:21.349447    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:21.349450    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:21 GMT
	I0425 12:34:21.349566    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1044","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3395 chars]
	I0425 12:34:21.847877    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:21.847903    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:21.847931    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:21.847937    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:21.849862    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:21.849879    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:21.849884    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:21.849887    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:21.849889    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:21.849892    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:21.849895    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:22 GMT
	I0425 12:34:21.849898    5918 round_trippers.go:580]     Audit-Id: 6471eb27-ee44-4a88-9cb7-6cbc8844d09c
	I0425 12:34:21.849961    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1044","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3395 chars]
	I0425 12:34:22.347868    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:22.347886    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:22.347897    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:22.347904    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:22.350111    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:22.350128    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:22.350138    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:22.350148    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:22 GMT
	I0425 12:34:22.350156    5918 round_trippers.go:580]     Audit-Id: 7983c72e-a9d6-45e8-9142-4ae00c21d56c
	I0425 12:34:22.350161    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:22.350165    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:22.350170    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:22.350424    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1064","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3504 chars]
	I0425 12:34:22.847937    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:22.847959    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:22.847972    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:22.847979    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:22.850028    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:22.850041    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:22.850048    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:22.850053    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:23 GMT
	I0425 12:34:22.850058    5918 round_trippers.go:580]     Audit-Id: 7c24a205-0abb-4cd3-80de-df2747113f44
	I0425 12:34:22.850063    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:22.850069    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:22.850074    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:22.850340    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1064","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3504 chars]
	I0425 12:34:23.349829    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:23.349851    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:23.349866    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:23.349872    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:23.352049    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:23.352065    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:23.352071    5918 round_trippers.go:580]     Audit-Id: c1bfedc2-195a-4620-9d27-972463165b53
	I0425 12:34:23.352074    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:23.352098    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:23.352105    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:23.352108    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:23.352112    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:23 GMT
	I0425 12:34:23.352193    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1064","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3504 chars]
	I0425 12:34:23.352362    5918 node_ready.go:53] node "multinode-034000-m03" has status "Ready":"False"
	I0425 12:34:23.848598    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:23.848613    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:23.848622    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:23.848627    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:23.850773    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:23.850785    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:23.850793    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:23.850797    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:23.850802    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:24 GMT
	I0425 12:34:23.850805    5918 round_trippers.go:580]     Audit-Id: 5f7ed6a3-acbd-42da-bf80-641093d91995
	I0425 12:34:23.850810    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:23.850814    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:23.850988    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1064","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3504 chars]
	I0425 12:34:24.348327    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:24.348347    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:24.348358    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:24.348365    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:24.350811    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:24.350832    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:24.350840    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:24.350855    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:24.350860    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:24.350864    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:24 GMT
	I0425 12:34:24.350867    5918 round_trippers.go:580]     Audit-Id: b676cf38-d29c-49d3-9c27-17e25c07e56b
	I0425 12:34:24.350870    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:24.351040    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1064","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3504 chars]
	I0425 12:34:24.848652    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:24.848682    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:24.848694    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:24.848699    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:24.850827    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:24.850843    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:24.850850    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:24.850854    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:25 GMT
	I0425 12:34:24.850857    5918 round_trippers.go:580]     Audit-Id: fc83434d-3aa5-4cf0-ad76-4e516df2b489
	I0425 12:34:24.850861    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:24.850864    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:24.850868    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:24.851014    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1064","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3504 chars]
	I0425 12:34:25.349674    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:25.349701    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:25.349714    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:25.349720    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:25.352590    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:25.352607    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:25.352613    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:25 GMT
	I0425 12:34:25.352618    5918 round_trippers.go:580]     Audit-Id: 7928eca7-ffc7-46b5-baa7-fff8fd44093e
	I0425 12:34:25.352624    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:25.352627    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:25.352649    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:25.352658    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:25.352768    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1064","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3504 chars]
	I0425 12:34:25.352991    5918 node_ready.go:53] node "multinode-034000-m03" has status "Ready":"False"
	I0425 12:34:25.847966    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:25.847999    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:25.848007    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:25.848012    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:25.849495    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:25.849507    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:25.849515    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:25.849521    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:26 GMT
	I0425 12:34:25.849525    5918 round_trippers.go:580]     Audit-Id: ba084865-26c0-4eb9-a903-f59297e43e13
	I0425 12:34:25.849528    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:25.849532    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:25.849536    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:25.849598    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1064","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3504 chars]
	I0425 12:34:26.348973    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:26.348995    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:26.349006    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:26.349017    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:26.351447    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:26.351460    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:26.351466    5918 round_trippers.go:580]     Audit-Id: c91ab1c4-0cf7-4ef0-b4eb-5b3516f7b5cd
	I0425 12:34:26.351470    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:26.351473    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:26.351477    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:26.351481    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:26.351485    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:26 GMT
	I0425 12:34:26.351586    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1064","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3504 chars]
	I0425 12:34:26.848837    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:26.848856    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:26.848865    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:26.848870    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:26.850412    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:26.850422    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:26.850427    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:26.850431    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:26.850441    5918 round_trippers.go:580]     Audit-Id: 05fb6ae8-a1b9-47ab-beb6-c9f78cdee531
	I0425 12:34:26.850444    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:26.850447    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:26.850465    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:26.850590    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1070","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3762 chars]
	I0425 12:34:26.850757    5918 node_ready.go:49] node "multinode-034000-m03" has status "Ready":"True"
	I0425 12:34:26.850766    5918 node_ready.go:38] duration metric: took 8.003014116s for node "multinode-034000-m03" to be "Ready" ...
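The eight seconds of GETs above are a roughly 500ms poll of the node's Ready condition until it flips to "True". A minimal sketch of an equivalent wait loop (illustrative, not minikube's node_ready.go; node name and cadence taken from the log, kubeconfig path a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, matching the cadence and the
	// "Will wait 6m0s" budget in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(ctx, "multinode-034000-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			return nodeReady(n), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}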
	I0425 12:34:26.850779    5918 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 12:34:26.850812    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0425 12:34:26.850841    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:26.850846    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:26.850849    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:26.852737    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:26.852748    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:26.852757    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:26.852764    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:26.852768    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:26.852772    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:26.852775    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:26.852778    5918 round_trippers.go:580]     Audit-Id: 3ff47412-00e1-44c6-9aff-a10a13cf8067
	I0425 12:34:26.853440    5918 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1070"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"871","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 85988 chars]
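The PodList request above fetches every pod in kube-system in a single call; the system-critical filtering by label happens afterwards. An illustrative fetch-and-filter sketch (not the test's code; the label values are copied from the log, the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One GET /api/v1/namespaces/kube-system/pods, as in the request above.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Keep only pods carrying one of the system-critical labels named in the log.
	critical := []string{"kube-dns", "etcd", "kube-apiserver", "kube-controller-manager", "kube-proxy", "kube-scheduler"}
	for _, p := range pods.Items {
		for _, want := range critical {
			if p.Labels["k8s-app"] == want || p.Labels["component"] == want {
				fmt.Println(p.Name)
			}
		}
	}
}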
	I0425 12:34:26.855331    5918 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:26.855375    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w5z5l
	I0425 12:34:26.855380    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:26.855385    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:26.855389    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:26.858175    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:26.858186    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:26.858191    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:26.858195    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:26.858199    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:26.858202    5918 round_trippers.go:580]     Audit-Id: 36c956f7-6ec7-4701-ac58-00f004620e2d
	I0425 12:34:26.858204    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:26.858208    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:26.858591    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-w5z5l","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"21ddb5bc-fcf1-4ec4-9fdb-8595d406b302","resourceVersion":"871","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0f6fd182-365c-4265-af59-803da1fb2bab","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f6fd182-365c-4265-af59-803da1fb2bab\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6783 chars]
	I0425 12:34:26.858879    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:34:26.858888    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:26.858893    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:26.858897    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:26.860625    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:26.860636    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:26.860642    5918 round_trippers.go:580]     Audit-Id: 10128468-84db-4797-a913-90080872e73a
	I0425 12:34:26.860651    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:26.860655    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:26.860657    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:26.860660    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:26.860663    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:26.860743    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:34:26.860929    5918 pod_ready.go:92] pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace has status "Ready":"True"
	I0425 12:34:26.860945    5918 pod_ready.go:81] duration metric: took 5.603ms for pod "coredns-7db6d8ff4d-w5z5l" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:26.860952    5918 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:26.860990    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-034000
	I0425 12:34:26.860995    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:26.861000    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:26.861005    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:26.862165    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:26.862173    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:26.862177    5918 round_trippers.go:580]     Audit-Id: 92629be6-8cf1-4d35-8026-64743a2db22b
	I0425 12:34:26.862181    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:26.862184    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:26.862187    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:26.862190    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:26.862195    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:26.862379    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-034000","namespace":"kube-system","uid":"fc3cbeb4-6bc0-4ee1-b4b9-75eceb7b2ed5","resourceVersion":"885","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.mirror":"869e6afbe5edef90b0d9db60a8203a84","kubernetes.io/config.seen":"2024-04-25T19:24:03.349964798Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0425 12:34:26.862614    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:34:26.862621    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:26.862626    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:26.862629    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:26.863885    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:26.863894    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:26.863900    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:26.863908    5918 round_trippers.go:580]     Audit-Id: dda2750f-3614-4c9e-a027-0f01ec46c25f
	I0425 12:34:26.863914    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:26.863916    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:26.863919    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:26.863921    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:26.864097    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:34:26.864293    5918 pod_ready.go:92] pod "etcd-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:34:26.864302    5918 pod_ready.go:81] duration metric: took 3.344593ms for pod "etcd-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:26.864311    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:26.864340    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-034000
	I0425 12:34:26.864345    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:26.864351    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:26.864353    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:26.865229    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:34:26.865235    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:26.865239    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:26.865242    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:26.865247    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:26.865252    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:26.865255    5918 round_trippers.go:580]     Audit-Id: ed0c9a3a-e87b-4441-95a1-76b72b8f776d
	I0425 12:34:26.865259    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:26.865493    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-034000","namespace":"kube-system","uid":"d142ad34-9a12-42f9-b92d-e0f968eaaa14","resourceVersion":"869","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"d809c763efd59e895582aab9f4e65d83","kubernetes.io/config.mirror":"d809c763efd59e895582aab9f4e65d83","kubernetes.io/config.seen":"2024-04-25T19:24:03.349967563Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0425 12:34:26.865727    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:34:26.865734    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:26.865740    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:26.865743    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:26.866900    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:26.866910    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:26.866916    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:26.866919    5918 round_trippers.go:580]     Audit-Id: 19a84f56-a90b-4cb8-890e-86b9e052cc46
	I0425 12:34:26.866922    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:26.866925    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:26.866929    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:26.866930    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:26.867111    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:34:26.867282    5918 pod_ready.go:92] pod "kube-apiserver-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:34:26.867290    5918 pod_ready.go:81] duration metric: took 2.973627ms for pod "kube-apiserver-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:26.867296    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:26.867334    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-034000
	I0425 12:34:26.867339    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:26.867344    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:26.867348    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:26.868403    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:26.868410    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:26.868414    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:26.868419    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:26.868423    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:26.868427    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:26.868431    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:26.868433    5918 round_trippers.go:580]     Audit-Id: ae35adb5-6eb8-4be6-a63c-6b4c5ebc81f6
	I0425 12:34:26.868557    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-034000","namespace":"kube-system","uid":"19072fbe-3cb2-4b92-bd98-b549daec4cf2","resourceVersion":"862","creationTimestamp":"2024-04-25T19:24:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8241dd03fc3448a4525ccebdefebf535","kubernetes.io/config.mirror":"8241dd03fc3448a4525ccebdefebf535","kubernetes.io/config.seen":"2024-04-25T19:23:58.495195502Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0425 12:34:26.868802    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:34:26.868811    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:26.868816    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:26.868820    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:26.869666    5918 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 12:34:26.869674    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:26.869683    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:26.869688    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:26.869691    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:26.869694    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:26.869702    5918 round_trippers.go:580]     Audit-Id: 1635fe6b-4844-4477-8786-b4f64a66b424
	I0425 12:34:26.869708    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:26.869807    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:34:26.869977    5918 pod_ready.go:92] pod "kube-controller-manager-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:34:26.869985    5918 pod_ready.go:81] duration metric: took 2.683831ms for pod "kube-controller-manager-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:26.870007    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d8zc5" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:27.050933    5918 request.go:629] Waited for 180.859434ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8zc5
	I0425 12:34:27.050985    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8zc5
	I0425 12:34:27.051030    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:27.051042    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:27.051050    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:27.053436    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:27.053449    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:27.053456    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:27.053460    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:27.053464    5918 round_trippers.go:580]     Audit-Id: 6c55ff07-68d2-4071-9128-0bd1105d9138
	I0425 12:34:27.053469    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:27.053485    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:27.053490    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:27.054167    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d8zc5","generateName":"kube-proxy-","namespace":"kube-system","uid":"feefb48f-5488-4adc-b7e8-47f5d92bd2f8","resourceVersion":"1060","creationTimestamp":"2024-04-25T19:25:33Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:25:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0425 12:34:27.248913    5918 request.go:629] Waited for 194.41688ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:27.248983    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m03
	I0425 12:34:27.248988    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:27.248994    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:27.248998    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:27.250755    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:27.250764    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:27.250768    5918 round_trippers.go:580]     Audit-Id: 268d0669-a0bb-4a44-8df5-3042ad731a71
	I0425 12:34:27.250780    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:27.250790    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:27.250796    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:27.250802    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:27.250805    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:27.251010    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m03","uid":"14f4949f-f66f-4334-8804-36e539a4b930","resourceVersion":"1071","creationTimestamp":"2024-04-25T19:34:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_34_18_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:34:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3642 chars]
	I0425 12:34:27.251184    5918 pod_ready.go:92] pod "kube-proxy-d8zc5" in "kube-system" namespace has status "Ready":"True"
	I0425 12:34:27.251192    5918 pod_ready.go:81] duration metric: took 381.167428ms for pod "kube-proxy-d8zc5" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:27.251199    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gmspl" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:27.448963    5918 request.go:629] Waited for 197.713618ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmspl
	I0425 12:34:27.449061    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmspl
	I0425 12:34:27.449071    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:27.449083    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:27.449093    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:27.451626    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:27.451643    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:27.451658    5918 round_trippers.go:580]     Audit-Id: aa4f56cf-5d9b-4860-a73c-d115540e844f
	I0425 12:34:27.451668    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:27.451672    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:27.451676    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:27.451680    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:27.451685    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:27.451773    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gmspl","generateName":"kube-proxy-","namespace":"kube-system","uid":"b0f6c7c8-ef54-4c63-9de2-05e01ace3e15","resourceVersion":"842","creationTimestamp":"2024-04-25T19:24:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0425 12:34:27.649950    5918 request.go:629] Waited for 197.825025ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:34:27.650048    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:34:27.650058    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:27.650069    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:27.650077    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:27.652287    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:27.652300    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:27.652307    5918 round_trippers.go:580]     Audit-Id: fda8e68b-d147-4f8e-8e6b-40e34cc0e0c6
	I0425 12:34:27.652312    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:27.652317    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:27.652322    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:27.652327    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:27.652330    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:27 GMT
	I0425 12:34:27.652466    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:34:27.652751    5918 pod_ready.go:92] pod "kube-proxy-gmspl" in "kube-system" namespace has status "Ready":"True"
	I0425 12:34:27.652764    5918 pod_ready.go:81] duration metric: took 401.548033ms for pod "kube-proxy-gmspl" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:27.652773    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mp7qm" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:27.849614    5918 request.go:629] Waited for 196.743325ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mp7qm
	I0425 12:34:27.849670    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mp7qm
	I0425 12:34:27.849681    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:27.849692    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:27.849698    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:27.851907    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:27.851923    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:27.851934    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:27.851958    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:27.851974    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:28 GMT
	I0425 12:34:27.851984    5918 round_trippers.go:580]     Audit-Id: 2b9cbb44-80d1-448a-9e25-1d1abe7c41ad
	I0425 12:34:27.852004    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:27.852012    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:27.852098    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mp7qm","generateName":"kube-proxy-","namespace":"kube-system","uid":"cc106198-3317-44e2-b1a7-cc5eac6dcadc","resourceVersion":"973","creationTimestamp":"2024-04-25T19:24:51Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"49e77322-1a50-44c2-893c-6d075456cce1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"49e77322-1a50-44c2-893c-6d075456cce1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0425 12:34:28.049049    5918 request.go:629] Waited for 196.591922ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:34:28.049117    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000-m02
	I0425 12:34:28.049126    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:28.049139    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:28.049149    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:28.051982    5918 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 12:34:28.052004    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:28.052012    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:28 GMT
	I0425 12:34:28.052017    5918 round_trippers.go:580]     Audit-Id: 652d25f3-c1a4-4bbf-8e86-bc1167f19fc5
	I0425 12:34:28.052022    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:28.052026    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:28.052031    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:28.052035    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:28.052180    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000-m02","uid":"f17dfbe6-22b9-442b-b838-0b8c93835a05","resourceVersion":"994","creationTimestamp":"2024-04-25T19:33:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_25T12_33_53_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:33:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3810 chars]
	I0425 12:34:28.052401    5918 pod_ready.go:92] pod "kube-proxy-mp7qm" in "kube-system" namespace has status "Ready":"True"
	I0425 12:34:28.052413    5918 pod_ready.go:81] duration metric: took 399.620658ms for pod "kube-proxy-mp7qm" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:28.052456    5918 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:28.249485    5918 request.go:629] Waited for 196.95022ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-034000
	I0425 12:34:28.249539    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-034000
	I0425 12:34:28.249546    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:28.249554    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:28.249560    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:28.251535    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:28.251544    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:28.251549    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:28.251554    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:28.251556    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:28.251560    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:28.251562    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:28 GMT
	I0425 12:34:28.251565    5918 round_trippers.go:580]     Audit-Id: 3f3e9b1e-6b86-4ef5-88cb-6312b7d3ea53
	I0425 12:34:28.251700    5918 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-034000","namespace":"kube-system","uid":"889fb9d4-d8d9-4a92-be22-d0ab1518bc93","resourceVersion":"870","creationTimestamp":"2024-04-25T19:24:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9a729f77a28640b9fa006c14e6acbd43","kubernetes.io/config.mirror":"9a729f77a28640b9fa006c14e6acbd43","kubernetes.io/config.seen":"2024-04-25T19:24:03.349969029Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-25T19:24:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0425 12:34:28.449032    5918 request.go:629] Waited for 197.08721ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:34:28.449094    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-034000
	I0425 12:34:28.449105    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:28.449113    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:28.449120    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:28.450783    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:28.450793    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:28.450801    5918 round_trippers.go:580]     Audit-Id: 34ab9a50-2f8e-47f4-8fc7-becf78ed7488
	I0425 12:34:28.450805    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:28.450809    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:28.450816    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:28.450820    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:28.450822    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:28 GMT
	I0425 12:34:28.451072    5918 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-25T19:24:01Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0425 12:34:28.451268    5918 pod_ready.go:92] pod "kube-scheduler-multinode-034000" in "kube-system" namespace has status "Ready":"True"
	I0425 12:34:28.451277    5918 pod_ready.go:81] duration metric: took 398.79872ms for pod "kube-scheduler-multinode-034000" in "kube-system" namespace to be "Ready" ...
	I0425 12:34:28.451291    5918 pod_ready.go:38] duration metric: took 1.600457423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 12:34:28.451302    5918 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 12:34:28.451355    5918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:34:28.462019    5918 system_svc.go:56] duration metric: took 10.713402ms WaitForService to wait for kubelet
	I0425 12:34:28.462033    5918 kubeadm.go:576] duration metric: took 9.813135451s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 12:34:28.462045    5918 node_conditions.go:102] verifying NodePressure condition ...
	I0425 12:34:28.649296    5918 request.go:629] Waited for 187.20791ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes
	I0425 12:34:28.649362    5918 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0425 12:34:28.649367    5918 round_trippers.go:469] Request Headers:
	I0425 12:34:28.649373    5918 round_trippers.go:473]     Accept: application/json, */*
	I0425 12:34:28.649378    5918 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0425 12:34:28.651139    5918 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 12:34:28.651150    5918 round_trippers.go:577] Response Headers:
	I0425 12:34:28.651157    5918 round_trippers.go:580]     Audit-Id: 77037218-63bf-4ba1-93c1-a4fa15cd3493
	I0425 12:34:28.651160    5918 round_trippers.go:580]     Cache-Control: no-cache, private
	I0425 12:34:28.651163    5918 round_trippers.go:580]     Content-Type: application/json
	I0425 12:34:28.651166    5918 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8487d9cd-63c9-4897-811f-1cc6ab2aeb61
	I0425 12:34:28.651170    5918 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cb627848-0775-464d-8380-ebceb438b586
	I0425 12:34:28.651172    5918 round_trippers.go:580]     Date: Thu, 25 Apr 2024 19:34:28 GMT
	I0425 12:34:28.651445    5918 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1072"},"items":[{"metadata":{"name":"multinode-034000","uid":"e94fffd9-f099-47e0-a472-978351bd93dd","resourceVersion":"855","creationTimestamp":"2024-04-25T19:24:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-034000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9b1f143bb00c241dc73ba7b698e8f6c1855732d7","minikube.k8s.io/name":"multinode-034000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_25T12_24_04_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 14663 chars]
	I0425 12:34:28.651842    5918 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:34:28.651855    5918 node_conditions.go:123] node cpu capacity is 2
	I0425 12:34:28.651861    5918 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:34:28.651865    5918 node_conditions.go:123] node cpu capacity is 2
	I0425 12:34:28.651868    5918 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 12:34:28.651871    5918 node_conditions.go:123] node cpu capacity is 2
	I0425 12:34:28.651874    5918 node_conditions.go:105] duration metric: took 189.820074ms to run NodePressure ...
	I0425 12:34:28.651882    5918 start.go:240] waiting for startup goroutines ...
	I0425 12:34:28.651896    5918 start.go:254] writing updated cluster config ...
	I0425 12:34:28.652251    5918 ssh_runner.go:195] Run: rm -f paused
	I0425 12:34:28.695074    5918 start.go:600] kubectl: 1.29.2, cluster: 1.30.0 (minor skew: 1)
	I0425 12:34:28.718544    5918 out.go:177] * Done! kubectl is now configured to use "multinode-034000" cluster and "default" namespace by default
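
	[editor's note] The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's local token-bucket rate limiter, which is distinct from server-side API Priority and Fairness (visible only as the X-Kubernetes-Pf-Flowschema-Uid / X-Kubernetes-Pf-Prioritylevel-Uid response headers). A minimal sketch of the knobs involved follows; it is not minikube's code, and the kubeconfig path and pod name are illustrative.

	// Minimal sketch (assumed setup, not minikube source): the client-go
	// rate limiter that produces the "Waited for ..." log lines above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/me/.kube/config")
		if err != nil {
			panic(err)
		}
		// client-go throttles requests locally before they reach the server;
		// tight polling loops (like the pod_ready checks above) sleep when the
		// token bucket empties, which is what request.go:629 reports.
		cfg.QPS = 50    // sustained requests per second allowed client-side
		cfg.Burst = 100 // short bursts above QPS allowed up to this size

		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := clientset.CoreV1().Pods("kube-system").Get(
			context.TODO(), "kube-proxy-d8zc5", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println(pod.Name, pod.Status.Phase)
	}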
	
	
	==> Docker <==
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.567141683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.567152850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.567301041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.567792943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.567855556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.567866437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.567924955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:32:57 multinode-034000 cri-dockerd[1088]: time="2024-04-25T19:32:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/813d5d39d3668deee22ccc300b3e4a23ea960c4bb24ee687b662ee42e6573bc6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 25 19:32:57 multinode-034000 cri-dockerd[1088]: time="2024-04-25T19:32:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b1f0ec17a321cacce8fd1fdb11fb60730697f7da90cf4fcfaa01785639326ce0/resolv.conf as [nameserver 192.169.0.1]"
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.840601471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.840685269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.840707881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.841052656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.894965563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.895209491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.895413516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:32:57 multinode-034000 dockerd[883]: time="2024-04-25T19:32:57.895634305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:33:20 multinode-034000 dockerd[877]: time="2024-04-25T19:33:20.442912151Z" level=info msg="ignoring event" container=33e20d87670fe82b4263430f034c98fae9c8586ea371974a73b1dcf6506f3090 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 25 19:33:20 multinode-034000 dockerd[883]: time="2024-04-25T19:33:20.443371113Z" level=info msg="shim disconnected" id=33e20d87670fe82b4263430f034c98fae9c8586ea371974a73b1dcf6506f3090 namespace=moby
	Apr 25 19:33:20 multinode-034000 dockerd[883]: time="2024-04-25T19:33:20.443409537Z" level=warning msg="cleaning up after shim disconnected" id=33e20d87670fe82b4263430f034c98fae9c8586ea371974a73b1dcf6506f3090 namespace=moby
	Apr 25 19:33:20 multinode-034000 dockerd[883]: time="2024-04-25T19:33:20.443416433Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 25 19:33:33 multinode-034000 dockerd[883]: time="2024-04-25T19:33:33.733247086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 25 19:33:33 multinode-034000 dockerd[883]: time="2024-04-25T19:33:33.733312177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 25 19:33:33 multinode-034000 dockerd[883]: time="2024-04-25T19:33:33.733325515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 25 19:33:33 multinode-034000 dockerd[883]: time="2024-04-25T19:33:33.733403600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d3d21e0e685ca       6e38f40d628db                                                                                         57 seconds ago       Running             storage-provisioner       2                   c5d0933caa826       storage-provisioner
	eb2f928ea5770       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   b1f0ec17a321c       coredns-7db6d8ff4d-w5z5l
	9e24a102cf114       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   813d5d39d3668       busybox-fc5497c4f-hkq6z
	272fc7ee8c40a       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   bc2c0f6edc107       kindnet-7ktv2
	33e20d87670fe       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   c5d0933caa826       storage-provisioner
	0cfcda52bebec       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   690ecc66a737a       kube-proxy-gmspl
	402e6c6625b79       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      1                   a6f24c6336a3f       etcd-multinode-034000
	28358bd9d569f       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            1                   a6281e79b7ee8       kube-apiserver-multinode-034000
	dd484af068ceb       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   fc98155f47fc8       kube-controller-manager-multinode-034000
	05179202658cf       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   56080574f29ed       kube-scheduler-multinode-034000
	223e2c1e65bef       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   9 minutes ago        Exited              busybox                   0                   716dc468b6bee       busybox-fc5497c4f-hkq6z
	d1a679398f5dd       cbb01a7bd410d                                                                                         10 minutes ago       Exited              coredns                   0                   6ceafd789a01e       coredns-7db6d8ff4d-w5z5l
	6bbf310089edb       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              10 minutes ago       Exited              kindnet-cni               0                   f9e2b9879728a       kindnet-7ktv2
	573591286ee6c       a0bf559e280cf                                                                                         10 minutes ago       Exited              kube-proxy                0                   2dd0e2cf1dfae       kube-proxy-gmspl
	03ce4bf6442bc       3861cfcd7c04c                                                                                         10 minutes ago       Exited              etcd                      0                   605cbae9d65dd       etcd-multinode-034000
	a3c296be16a7a       259c8277fcbbc                                                                                         10 minutes ago       Exited              kube-scheduler            0                   351cbbeac7eae       kube-scheduler-multinode-034000
	5d1799046a89c       c7aad43836fa5                                                                                         10 minutes ago       Exited              kube-controller-manager   0                   f6c4a60e9a529       kube-controller-manager-multinode-034000
	691ca6c89d9a0       c42f13656d0b2                                                                                         10 minutes ago       Exited              kube-apiserver            0                   46ac0a1db04a9       kube-apiserver-multinode-034000
	
	
	==> coredns [d1a679398f5d] <==
	[INFO] 10.244.1.2:44610 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000063983s
	[INFO] 10.244.1.2:53825 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074155s
	[INFO] 10.244.1.2:56114 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000041609s
	[INFO] 10.244.1.2:59228 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000055104s
	[INFO] 10.244.1.2:45598 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000217219s
	[INFO] 10.244.1.2:48480 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079958s
	[INFO] 10.244.1.2:34039 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057072s
	[INFO] 10.244.0.3:51389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110967s
	[INFO] 10.244.0.3:49029 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057828s
	[INFO] 10.244.0.3:49200 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081092s
	[INFO] 10.244.0.3:45787 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090854s
	[INFO] 10.244.1.2:44075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109399s
	[INFO] 10.244.1.2:43218 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076398s
	[INFO] 10.244.1.2:48145 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077478s
	[INFO] 10.244.1.2:46778 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094212s
	[INFO] 10.244.0.3:42740 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096877s
	[INFO] 10.244.0.3:39459 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006717s
	[INFO] 10.244.0.3:44705 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104604s
	[INFO] 10.244.0.3:45646 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000094006s
	[INFO] 10.244.1.2:52301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078287s
	[INFO] 10.244.1.2:41666 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089165s
	[INFO] 10.244.1.2:45151 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000063309s
	[INFO] 10.244.1.2:53549 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000056035s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
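
	[editor's note] The NXDOMAIN/NOERROR pattern above for "kubernetes.default" follows from the pod resolv.conf that cri-dockerd writes (search default.svc.cluster.local svc.cluster.local cluster.local, options ndots:5): a name with fewer than five dots is tried against each search suffix until one answers. A minimal sketch of that expansion, assuming glibc-style resolver behavior (this is not CoreDNS source):

	// Sketch of resolv.conf search-list expansion under "options ndots:5".
	package main

	import (
		"fmt"
		"strings"
	)

	// expand returns the fully qualified candidates tried for name, in order.
	func expand(name string, search []string, ndots int) []string {
		if strings.HasSuffix(name, ".") {
			return []string{name} // already absolute: queried as-is
		}
		var out []string
		if strings.Count(name, ".") < ndots {
			// Too few dots: try each search-list suffix first; the first
			// candidates yield NXDOMAIN until the real service name matches.
			for _, s := range search {
				out = append(out, name+"."+s+".")
			}
		}
		return append(out, name+".") // finally, the literal name
	}

	func main() {
		search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		for _, q := range expand("kubernetes.default", search, 5) {
			fmt.Println(q)
		}
	}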
	
	
	==> coredns [eb2f928ea577] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50722 - 21655 "HINFO IN 2838478286617782151.8994208134258240629. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.003922078s
	
	
	==> describe nodes <==
	Name:               multinode-034000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-034000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=multinode-034000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T12_24_04_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:24:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-034000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:34:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:32:53 +0000   Thu, 25 Apr 2024 19:24:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:32:53 +0000   Thu, 25 Apr 2024 19:24:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:32:53 +0000   Thu, 25 Apr 2024 19:24:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:32:53 +0000   Thu, 25 Apr 2024 19:32:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.16
	  Hostname:    multinode-034000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 80da971b3d3e4b3d9393cb43062f0630
	  System UUID:                e4584236-0000-0000-8047-fdddf635d073
	  Boot ID:                    84c5764f-ffd8-4b98-a3b7-d8be8544136e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hkq6z                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                 coredns-7db6d8ff4d-w5z5l                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-034000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-7ktv2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-034000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-034000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-gmspl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-034000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-034000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-034000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-034000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-034000 event: Registered Node multinode-034000 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-034000 status is now: NodeReady
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node multinode-034000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node multinode-034000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node multinode-034000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           89s                  node-controller  Node multinode-034000 event: Registered Node multinode-034000 in Controller
	
	
	Name:               multinode-034000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-034000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=multinode-034000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T12_33_53_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:33:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-034000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:34:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:33:57 +0000   Thu, 25 Apr 2024 19:33:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:33:57 +0000   Thu, 25 Apr 2024 19:33:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:33:57 +0000   Thu, 25 Apr 2024 19:33:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:33:57 +0000   Thu, 25 Apr 2024 19:33:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.17
	  Hostname:    multinode-034000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 284e83a4e1bc45f284b4b8f3049fcb39
	  System UUID:                94b448d5-0000-0000-b8c4-70380b6d3376
	  Boot ID:                    f90e718c-d66f-42bb-993f-e306c64c5ae9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-p4498    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kindnet-gmxwj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m39s
	  kube-system                 kube-proxy-mp7qm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 36s                    kube-proxy       
	  Normal  Starting                 9m33s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m39s (x2 over 9m39s)  kubelet          Node multinode-034000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m39s (x2 over 9m39s)  kubelet          Node multinode-034000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m39s (x2 over 9m39s)  kubelet          Node multinode-034000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m26s                  kubelet          Node multinode-034000-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  38s (x2 over 38s)      kubelet          Node multinode-034000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x2 over 38s)      kubelet          Node multinode-034000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x2 over 38s)      kubelet          Node multinode-034000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           33s                    node-controller  Node multinode-034000-m02 event: Registered Node multinode-034000-m02 in Controller
	  Normal  NodeReady                33s                    kubelet          Node multinode-034000-m02 status is now: NodeReady
	
	
	Name:               multinode-034000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-034000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=multinode-034000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T12_34_18_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:34:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-034000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:34:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:34:26 +0000   Thu, 25 Apr 2024 19:34:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:34:26 +0000   Thu, 25 Apr 2024 19:34:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:34:26 +0000   Thu, 25 Apr 2024 19:34:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:34:26 +0000   Thu, 25 Apr 2024 19:34:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.18
	  Hostname:    multinode-034000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 6fb53ecebe4b43e08d9106a35dcefff4
	  System UUID:                c8494b42-0000-0000-86e6-91828949bf04
	  Boot ID:                    c59332ad-2dd4-418c-98e1-90676f9cacbe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-spsv9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m57s
	  kube-system                 kube-proxy-d8zc5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m51s                  kube-proxy  
	  Normal  Starting                 9s                     kube-proxy  
	  Normal  NodeHasSufficientMemory  8m58s (x2 over 8m58s)  kubelet     Node multinode-034000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m58s (x2 over 8m58s)  kubelet     Node multinode-034000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m58s (x2 over 8m58s)  kubelet     Node multinode-034000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m45s                  kubelet     Node multinode-034000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)      kubelet     Node multinode-034000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)      kubelet     Node multinode-034000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)      kubelet     Node multinode-034000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-034000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +5.360269] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007126] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.620568] systemd-fstab-generator[126]: Ignoring "noauto" option for root device
	[  +2.242478] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.493298] systemd-fstab-generator[475]: Ignoring "noauto" option for root device
	[  +0.104888] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +1.221765] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.637157] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.244925] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.107647] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.100612] systemd-fstab-generator[869]: Ignoring "noauto" option for root device
	[  +2.422427] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.111684] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.109868] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	[  +0.138167] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +0.397492] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +1.513249] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[  +0.051480] kauditd_printk_skb: 266 callbacks suppressed
	[  +5.040518] kauditd_printk_skb: 92 callbacks suppressed
	[  +2.829705] systemd-fstab-generator[2134]: Ignoring "noauto" option for root device
	[  +4.204000] kauditd_printk_skb: 40 callbacks suppressed
	[Apr25 19:33] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [03ce4bf6442b] <==
	{"level":"info","ts":"2024-04-25T19:23:59.622594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became candidate at term 2"}
	{"level":"info","ts":"2024-04-25T19:23:59.622602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa received MsgVoteResp from 1249487c082462aa at term 2"}
	{"level":"info","ts":"2024-04-25T19:23:59.622609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became leader at term 2"}
	{"level":"info","ts":"2024-04-25T19:23:59.622614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1249487c082462aa elected leader 1249487c082462aa at term 2"}
	{"level":"info","ts":"2024-04-25T19:23:59.628448Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:23:59.628761Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1249487c082462aa","local-member-attributes":"{Name:multinode-034000 ClientURLs:[https://192.169.0.16:2379]}","request-path":"/0/members/1249487c082462aa/attributes","cluster-id":"1e23f9358b15cc2f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-25T19:23:59.629135Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:23:59.629625Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:23:59.631785Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T19:23:59.63597Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T19:23:59.63284Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-25T19:23:59.635617Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e23f9358b15cc2f","local-member-id":"1249487c082462aa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:23:59.648343Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:23:59.650527Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:23:59.663353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.16:2379"}
	{"level":"info","ts":"2024-04-25T19:28:21.307323Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-25T19:28:21.307345Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-034000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.16:2380"],"advertise-client-urls":["https://192.169.0.16:2379"]}
	{"level":"warn","ts":"2024-04-25T19:28:21.307437Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-25T19:28:21.307482Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-25T19:28:21.329415Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.16:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-25T19:28:21.329496Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.16:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-25T19:28:21.329657Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1249487c082462aa","current-leader-member-id":"1249487c082462aa"}
	{"level":"info","ts":"2024-04-25T19:28:21.330888Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-04-25T19:28:21.331017Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-04-25T19:28:21.331029Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-034000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.16:2380"],"advertise-client-urls":["https://192.169.0.16:2379"]}
	
	
	==> etcd [402e6c6625b7] <==
	{"level":"info","ts":"2024-04-25T19:32:46.715951Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-25T19:32:46.716732Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-25T19:32:46.717074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa switched to configuration voters=(1317664063532327594)"}
	{"level":"info","ts":"2024-04-25T19:32:46.720642Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1e23f9358b15cc2f","local-member-id":"1249487c082462aa","added-peer-id":"1249487c082462aa","added-peer-peer-urls":["https://192.169.0.16:2380"]}
	{"level":"info","ts":"2024-04-25T19:32:46.720876Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e23f9358b15cc2f","local-member-id":"1249487c082462aa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:32:46.721246Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:32:46.724143Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-25T19:32:46.726448Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1249487c082462aa","initial-advertise-peer-urls":["https://192.169.0.16:2380"],"listen-peer-urls":["https://192.169.0.16:2380"],"advertise-client-urls":["https://192.169.0.16:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.16:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-25T19:32:46.726484Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-25T19:32:46.72661Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-04-25T19:32:46.726638Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-04-25T19:32:48.281478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-25T19:32:48.281523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-25T19:32:48.281564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa received MsgPreVoteResp from 1249487c082462aa at term 2"}
	{"level":"info","ts":"2024-04-25T19:32:48.281584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became candidate at term 3"}
	{"level":"info","ts":"2024-04-25T19:32:48.281593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa received MsgVoteResp from 1249487c082462aa at term 3"}
	{"level":"info","ts":"2024-04-25T19:32:48.281604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became leader at term 3"}
	{"level":"info","ts":"2024-04-25T19:32:48.281655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1249487c082462aa elected leader 1249487c082462aa at term 3"}
	{"level":"info","ts":"2024-04-25T19:32:48.283534Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1249487c082462aa","local-member-attributes":"{Name:multinode-034000 ClientURLs:[https://192.169.0.16:2379]}","request-path":"/0/members/1249487c082462aa/attributes","cluster-id":"1e23f9358b15cc2f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-25T19:32:48.283982Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:32:48.284247Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T19:32:48.284321Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T19:32:48.284342Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:32:48.28652Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.16:2379"}
	{"level":"info","ts":"2024-04-25T19:32:48.28654Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:34:31 up 2 min,  0 users,  load average: 0.88, 0.33, 0.12
	Linux multinode-034000 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [272fc7ee8c40] <==
	I0425 19:33:41.310255       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	I0425 19:33:51.313801       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0425 19:33:51.313910       1 main.go:227] handling current node
	I0425 19:33:51.313934       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0425 19:33:51.314085       1 main.go:250] Node multinode-034000-m02 has CIDR [10.244.1.0/24] 
	I0425 19:33:51.314326       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0425 19:33:51.314408       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	I0425 19:34:01.317940       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0425 19:34:01.318221       1 main.go:227] handling current node
	I0425 19:34:01.318314       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0425 19:34:01.318397       1 main.go:250] Node multinode-034000-m02 has CIDR [10.244.1.0/24] 
	I0425 19:34:01.318579       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0425 19:34:01.318662       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	I0425 19:34:11.331930       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0425 19:34:11.332104       1 main.go:227] handling current node
	I0425 19:34:11.332154       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0425 19:34:11.332170       1 main.go:250] Node multinode-034000-m02 has CIDR [10.244.1.0/24] 
	I0425 19:34:11.332409       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0425 19:34:11.332515       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	I0425 19:34:21.342379       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0425 19:34:21.342510       1 main.go:227] handling current node
	I0425 19:34:21.342557       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0425 19:34:21.342594       1 main.go:250] Node multinode-034000-m02 has CIDR [10.244.1.0/24] 
	I0425 19:34:21.342776       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0425 19:34:21.342866       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [6bbf310089ed] <==
	I0425 19:27:41.205045       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	I0425 19:27:51.208736       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0425 19:27:51.208887       1 main.go:227] handling current node
	I0425 19:27:51.208934       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0425 19:27:51.208952       1 main.go:250] Node multinode-034000-m02 has CIDR [10.244.1.0/24] 
	I0425 19:27:51.209107       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0425 19:27:51.209191       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	I0425 19:28:01.217780       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0425 19:28:01.217852       1 main.go:227] handling current node
	I0425 19:28:01.217870       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0425 19:28:01.217883       1 main.go:250] Node multinode-034000-m02 has CIDR [10.244.1.0/24] 
	I0425 19:28:01.217991       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0425 19:28:01.218048       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	I0425 19:28:11.228854       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0425 19:28:11.228981       1 main.go:227] handling current node
	I0425 19:28:11.229043       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0425 19:28:11.229082       1 main.go:250] Node multinode-034000-m02 has CIDR [10.244.1.0/24] 
	I0425 19:28:11.229191       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0425 19:28:11.229251       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	I0425 19:28:21.243437       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0425 19:28:21.243453       1 main.go:227] handling current node
	I0425 19:28:21.243461       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0425 19:28:21.243464       1 main.go:250] Node multinode-034000-m02 has CIDR [10.244.1.0/24] 
	I0425 19:28:21.243523       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0425 19:28:21.243528       1 main.go:250] Node multinode-034000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [28358bd9d569] <==
	I0425 19:32:49.218789       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0425 19:32:49.218837       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0425 19:32:49.219153       1 policy_source.go:224] refreshing policies
	I0425 19:32:49.224832       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0425 19:32:49.225000       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0425 19:32:49.225270       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0425 19:32:49.227270       1 shared_informer.go:320] Caches are synced for configmaps
	I0425 19:32:49.227382       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0425 19:32:49.228039       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0425 19:32:49.231551       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0425 19:32:49.235505       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0425 19:32:49.240369       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0425 19:32:49.240502       1 aggregator.go:165] initial CRD sync complete...
	I0425 19:32:49.240597       1 autoregister_controller.go:141] Starting autoregister controller
	I0425 19:32:49.240645       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0425 19:32:49.240713       1 cache.go:39] Caches are synced for autoregister controller
	I0425 19:32:49.266743       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0425 19:32:50.128828       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0425 19:32:51.299381       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0425 19:32:51.387973       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0425 19:32:51.395559       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0425 19:32:51.431587       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0425 19:32:51.435980       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0425 19:33:02.004015       1 controller.go:615] quota admission added evaluator for: endpoints
	I0425 19:33:02.205084       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [691ca6c89d9a] <==
	W0425 19:28:21.324821       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.324871       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.324908       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325001       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325031       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325086       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325140       1 logging.go:59] [core] [Channel #16 SubChannel #17] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325167       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325219       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325246       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325299       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325326       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325378       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325405       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325457       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325487       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325543       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325572       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325623       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325650       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325710       1 logging.go:59] [core] [Channel #9 SubChannel #10] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325738       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325791       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325845       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:28:21.325870       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5d1799046a89] <==
	I0425 19:24:17.647018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.324µs"
	I0425 19:24:26.000137       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.041µs"
	I0425 19:24:26.020051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.843µs"
	I0425 19:24:26.260842       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0425 19:24:27.653013       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.155µs"
	I0425 19:24:27.668905       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.499254ms"
	I0425 19:24:27.670147       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.91µs"
	I0425 19:24:51.152497       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-034000-m02\" does not exist"
	I0425 19:24:51.159771       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-034000-m02" podCIDRs=["10.244.1.0/24"]
	I0425 19:24:51.265242       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-034000-m02"
	I0425 19:25:04.097706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-034000-m02"
	I0425 19:25:06.271745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.049102ms"
	I0425 19:25:06.282882       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.462574ms"
	I0425 19:25:06.289779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.115163ms"
	I0425 19:25:06.290043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.37µs"
	I0425 19:25:08.175257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.022296ms"
	I0425 19:25:08.175734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.517µs"
	I0425 19:25:08.873029       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.132021ms"
	I0425 19:25:08.873366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.969µs"
	I0425 19:25:33.210402       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-034000-m02"
	I0425 19:25:33.211453       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-034000-m03\" does not exist"
	I0425 19:25:33.227490       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-034000-m03" podCIDRs=["10.244.2.0/24"]
	I0425 19:25:36.286256       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-034000-m03"
	I0425 19:25:46.185093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-034000-m02"
	I0425 19:26:36.307835       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-034000-m02"
	
	
	==> kube-controller-manager [dd484af068ce] <==
	I0425 19:33:02.579978       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0425 19:33:02.585540       1 shared_informer.go:320] Caches are synced for garbage collector
	I0425 19:33:42.013468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.056443ms"
	I0425 19:33:42.013524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.713µs"
	I0425 19:33:48.511498       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.058292ms"
	I0425 19:33:48.511660       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.406µs"
	I0425 19:33:48.518154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.207268ms"
	I0425 19:33:48.518304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.986µs"
	I0425 19:33:52.684321       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-034000-m02\" does not exist"
	I0425 19:33:52.693680       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-034000-m02" podCIDRs=["10.244.1.0/24"]
	I0425 19:33:53.574789       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.849µs"
	I0425 19:33:57.593729       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-034000-m02"
	I0425 19:33:57.603332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.136µs"
	I0425 19:34:05.558161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.912µs"
	I0425 19:34:05.611391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.901µs"
	I0425 19:34:05.612826       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.162µs"
	I0425 19:34:06.373213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.448µs"
	I0425 19:34:06.380571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.804µs"
	I0425 19:34:07.626721       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.890616ms"
	I0425 19:34:07.627159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.99µs"
	I0425 19:34:17.669523       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-034000-m02"
	I0425 19:34:18.474359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-034000-m02"
	I0425 19:34:18.474356       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-034000-m03\" does not exist"
	I0425 19:34:18.479757       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-034000-m03" podCIDRs=["10.244.2.0/24"]
	I0425 19:34:26.714076       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-034000-m02"
	
	
	==> kube-proxy [0cfcda52bebe] <==
	I0425 19:32:50.527267       1 server_linux.go:69] "Using iptables proxy"
	I0425 19:32:50.538403       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.16"]
	I0425 19:32:50.582194       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:32:50.582272       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:32:50.582286       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:32:50.586702       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:32:50.587286       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:32:50.587332       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:32:50.589179       1 config.go:192] "Starting service config controller"
	I0425 19:32:50.589430       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:32:50.589518       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:32:50.589524       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:32:50.590292       1 config.go:319] "Starting node config controller"
	I0425 19:32:50.590363       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:32:50.690247       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:32:50.690462       1 shared_informer.go:320] Caches are synced for node config
	I0425 19:32:50.690489       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [573591286ee6] <==
	I0425 19:24:18.434388       1 server_linux.go:69] "Using iptables proxy"
	I0425 19:24:18.449968       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.16"]
	I0425 19:24:18.483080       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:24:18.483210       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:24:18.483240       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:24:18.485629       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:24:18.485827       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:24:18.485857       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:24:18.486807       1 config.go:192] "Starting service config controller"
	I0425 19:24:18.487223       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:24:18.487260       1 config.go:319] "Starting node config controller"
	I0425 19:24:18.487265       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:24:18.487555       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:24:18.487584       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:24:18.588292       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:24:18.588509       1 shared_informer.go:320] Caches are synced for node config
	I0425 19:24:18.588519       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [05179202658c] <==
	I0425 19:32:47.421545       1 serving.go:380] Generated self-signed cert in-memory
	W0425 19:32:49.163304       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0425 19:32:49.163466       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 19:32:49.163491       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0425 19:32:49.163505       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0425 19:32:49.205173       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0425 19:32:49.205208       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:32:49.210941       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0425 19:32:49.211330       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0425 19:32:49.211371       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0425 19:32:49.211381       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0425 19:32:49.312820       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a3c296be16a7] <==
	E0425 19:24:01.101806       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0425 19:24:01.100765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0425 19:24:01.101848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 19:24:01.100501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 19:24:01.102094       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 19:24:01.101495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 19:24:01.102106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 19:24:01.906317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 19:24:01.906382       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 19:24:02.005981       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0425 19:24:02.006244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0425 19:24:02.058501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0425 19:24:02.058581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0425 19:24:02.073051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0425 19:24:02.073130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0425 19:24:02.105993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 19:24:02.106104       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0425 19:24:02.131193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 19:24:02.131270       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 19:24:02.150710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0425 19:24:02.150900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0425 19:24:02.256463       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 19:24:02.256557       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0425 19:24:05.393089       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0425 19:28:21.230718       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 25 19:32:50 multinode-034000 kubelet[1331]: E0425 19:32:50.370098    1331 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d559a6-e84a-4f26-8980-b56fefca9346-kube-api-access-rwkqt podName:06d559a6-e84a-4f26-8980-b56fefca9346 nodeName:}" failed. No retries permitted until 2024-04-25 19:32:51.370087608 +0000 UTC m=+5.801416583 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rwkqt" (UniqueName: "kubernetes.io/projected/06d559a6-e84a-4f26-8980-b56fefca9346-kube-api-access-rwkqt") pod "busybox-fc5497c4f-hkq6z" (UID: "06d559a6-e84a-4f26-8980-b56fefca9346") : object "default"/"kube-root-ca.crt" not registered
	Apr 25 19:32:51 multinode-034000 kubelet[1331]: E0425 19:32:51.275399    1331 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 25 19:32:51 multinode-034000 kubelet[1331]: E0425 19:32:51.275533    1331 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21ddb5bc-fcf1-4ec4-9fdb-8595d406b302-config-volume podName:21ddb5bc-fcf1-4ec4-9fdb-8595d406b302 nodeName:}" failed. No retries permitted until 2024-04-25 19:32:53.275521704 +0000 UTC m=+7.706850681 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/21ddb5bc-fcf1-4ec4-9fdb-8595d406b302-config-volume") pod "coredns-7db6d8ff4d-w5z5l" (UID: "21ddb5bc-fcf1-4ec4-9fdb-8595d406b302") : object "kube-system"/"coredns" not registered
	Apr 25 19:32:51 multinode-034000 kubelet[1331]: E0425 19:32:51.376286    1331 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Apr 25 19:32:51 multinode-034000 kubelet[1331]: E0425 19:32:51.376312    1331 projected.go:200] Error preparing data for projected volume kube-api-access-rwkqt for pod default/busybox-fc5497c4f-hkq6z: object "default"/"kube-root-ca.crt" not registered
	Apr 25 19:32:51 multinode-034000 kubelet[1331]: E0425 19:32:51.376348    1331 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d559a6-e84a-4f26-8980-b56fefca9346-kube-api-access-rwkqt podName:06d559a6-e84a-4f26-8980-b56fefca9346 nodeName:}" failed. No retries permitted until 2024-04-25 19:32:53.376338977 +0000 UTC m=+7.807667953 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rwkqt" (UniqueName: "kubernetes.io/projected/06d559a6-e84a-4f26-8980-b56fefca9346-kube-api-access-rwkqt") pod "busybox-fc5497c4f-hkq6z" (UID: "06d559a6-e84a-4f26-8980-b56fefca9346") : object "default"/"kube-root-ca.crt" not registered
	Apr 25 19:32:51 multinode-034000 kubelet[1331]: E0425 19:32:51.692453    1331 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-hkq6z" podUID="06d559a6-e84a-4f26-8980-b56fefca9346"
	Apr 25 19:32:51 multinode-034000 kubelet[1331]: E0425 19:32:51.693401    1331 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-w5z5l" podUID="21ddb5bc-fcf1-4ec4-9fdb-8595d406b302"
	Apr 25 19:32:53 multinode-034000 kubelet[1331]: E0425 19:32:53.289179    1331 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 25 19:32:53 multinode-034000 kubelet[1331]: E0425 19:32:53.289915    1331 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21ddb5bc-fcf1-4ec4-9fdb-8595d406b302-config-volume podName:21ddb5bc-fcf1-4ec4-9fdb-8595d406b302 nodeName:}" failed. No retries permitted until 2024-04-25 19:32:57.289891685 +0000 UTC m=+11.721220677 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/21ddb5bc-fcf1-4ec4-9fdb-8595d406b302-config-volume") pod "coredns-7db6d8ff4d-w5z5l" (UID: "21ddb5bc-fcf1-4ec4-9fdb-8595d406b302") : object "kube-system"/"coredns" not registered
	Apr 25 19:32:53 multinode-034000 kubelet[1331]: E0425 19:32:53.389499    1331 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Apr 25 19:32:53 multinode-034000 kubelet[1331]: E0425 19:32:53.389555    1331 projected.go:200] Error preparing data for projected volume kube-api-access-rwkqt for pod default/busybox-fc5497c4f-hkq6z: object "default"/"kube-root-ca.crt" not registered
	Apr 25 19:32:53 multinode-034000 kubelet[1331]: E0425 19:32:53.389634    1331 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d559a6-e84a-4f26-8980-b56fefca9346-kube-api-access-rwkqt podName:06d559a6-e84a-4f26-8980-b56fefca9346 nodeName:}" failed. No retries permitted until 2024-04-25 19:32:57.389623696 +0000 UTC m=+11.820952671 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rwkqt" (UniqueName: "kubernetes.io/projected/06d559a6-e84a-4f26-8980-b56fefca9346-kube-api-access-rwkqt") pod "busybox-fc5497c4f-hkq6z" (UID: "06d559a6-e84a-4f26-8980-b56fefca9346") : object "default"/"kube-root-ca.crt" not registered
	Apr 25 19:32:53 multinode-034000 kubelet[1331]: E0425 19:32:53.691390    1331 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-hkq6z" podUID="06d559a6-e84a-4f26-8980-b56fefca9346"
	Apr 25 19:32:53 multinode-034000 kubelet[1331]: E0425 19:32:53.691814    1331 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-w5z5l" podUID="21ddb5bc-fcf1-4ec4-9fdb-8595d406b302"
	Apr 25 19:32:53 multinode-034000 kubelet[1331]: I0425 19:32:53.792278    1331 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Apr 25 19:33:21 multinode-034000 kubelet[1331]: I0425 19:33:21.166579    1331 scope.go:117] "RemoveContainer" containerID="5a723e5001a43994715a4859df72f8530969e1abfcde368cebc6d4ff7aee0bff"
	Apr 25 19:33:21 multinode-034000 kubelet[1331]: I0425 19:33:21.167369    1331 scope.go:117] "RemoveContainer" containerID="33e20d87670fe82b4263430f034c98fae9c8586ea371974a73b1dcf6506f3090"
	Apr 25 19:33:21 multinode-034000 kubelet[1331]: E0425 19:33:21.167914    1331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(89c78c52-dabe-4a5b-ac3b-0209ccb11139)\"" pod="kube-system/storage-provisioner" podUID="89c78c52-dabe-4a5b-ac3b-0209ccb11139"
	Apr 25 19:33:33 multinode-034000 kubelet[1331]: I0425 19:33:33.692537    1331 scope.go:117] "RemoveContainer" containerID="33e20d87670fe82b4263430f034c98fae9c8586ea371974a73b1dcf6506f3090"
	Apr 25 19:33:45 multinode-034000 kubelet[1331]: E0425 19:33:45.707098    1331 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:33:45 multinode-034000 kubelet[1331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:33:45 multinode-034000 kubelet[1331]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:33:45 multinode-034000 kubelet[1331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:33:45 multinode-034000 kubelet[1331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
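
Note on the kubelet log above: every MountVolume.SetUp failure is the same pattern. After the node restart, the kubelet rejects volume setup for busybox and coredns with 'object "default"/"kube-root-ca.crt" not registered' (and the kube-system/coredns equivalent) until its informer caches resync, backing off 1s -> 2s -> 4s; the errors stop once the node reports ready at 19:32:53. A minimal client-go sketch of a diagnostic that waits for the ConfigMap the kubelet is blocked on (hypothetical, assuming a default kubeconfig; not part of the test harness):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; a harness would point this at the profile's context.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	// Poll until the ConfigMap the projected volume needs is visible.
	for {
		_, err := clientset.CoreV1().ConfigMaps("default").Get(ctx, "kube-root-ca.crt", metav1.GetOptions{})
		if err == nil {
			fmt.Println("kube-root-ca.crt present; kubelet retries should now succeed")
			return
		}
		fmt.Printf("still waiting: %v\n", err)
		select {
		case <-ctx.Done():
			panic(ctx.Err())
		case <-time.After(5 * time.Second):
		}
	}
}
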
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-034000 -n multinode-034000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-034000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (128.60s)
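
For reference, the post-mortem query at helpers_test.go:261 lists every pod not yet Running across all namespaces. A client-go equivalent of that kubectl invocation, as a sketch (assuming a default kubeconfig; the harness shells out to kubectl rather than using this):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same filter as --field-selector=status.phase!=Running in the log above.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
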

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7201.362s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0425 13:22:26.274174    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 13:22:29.161969    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/old-k8s-version-964000/client.crt: no such file or directory
E0425 13:22:32.410655    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/enable-default-cni-426000/client.crt: no such file or directory
E0425 13:22:39.005299    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/custom-flannel-426000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:22:55.891403    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/false-426000/client.crt: no such file or directory
E0425 13:23:00.860149    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 13:23:02.285667    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/bridge-426000/client.crt: no such file or directory
E0425 13:23:12.708258    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/kubenet-426000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:23:34.317777    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:23:57.377131    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/auto-426000/client.crt: no such file or directory
E0425 13:24:10.850422    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/kindnet-426000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:24:26.225662    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/flannel-426000/client.crt: no such file or directory
E0425 13:24:35.773632    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/kubenet-426000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:24:57.816977    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 13:25:17.796444    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/no-preload-015000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:25:47.396661    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/calico-426000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:26:09.385755    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/enable-default-cni-426000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:26:39.248567    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/bridge-426000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:27:09.376233    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:27:26.318609    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 13:27:29.199209    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/old-k8s-version-964000/client.crt: no such file or directory
E0425 13:27:39.029344    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/custom-flannel-426000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:27:55.905478    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/false-426000/client.crt: no such file or directory
E0425 13:28:12.719686    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/kubenet-426000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:28:52.257366    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/old-k8s-version-964000/client.crt: no such file or directory
E0425 13:28:57.386312    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/auto-426000/client.crt: no such file or directory
E0425 13:29:10.861326    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/kindnet-426000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:29:26.233710    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/flannel-426000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
E0425 13:29:57.825100    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 13:30:17.806983    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/no-preload-015000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.47:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.47:8444: i/o timeout
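
The recurring cert_rotation.go:168 errors are client-go's client-certificate reloader at work: transports created for profiles torn down earlier in the run (functional-380000, old-k8s-version-964000, and so on) keep trying to re-read a client.crt that no longer exists, which is also why the goroutine dump below is full of transport/cert_rotation.go workers parked for up to 117 minutes. The underlying failure is just a missing key pair; a hypothetical reproduction with made-up paths:

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Any cert/key pair under a deleted profile directory fails the same way.
	_, err := tls.LoadX509KeyPair(
		"/tmp/deleted-profile/client.crt", // hypothetical path
		"/tmp/deleted-profile/client.key",
	)
	fmt.Println(err) // open /tmp/deleted-profile/client.crt: no such file or directory
}
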
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (45m15s)
	TestNetworkPlugins/group (26m48s)
	TestStartStop (37m42s)
	TestStartStop/group/default-k8s-diff-port (19m43s)
	TestStartStop/group/default-k8s-diff-port/serial (19m43s)
	TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8m9s)
	TestStartStop/group/newest-cni (14m3s)
	TestStartStop/group/newest-cni/serial (14m3s)
	TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (41s)

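The panic itself is stock Go testing behavior rather than a minikube bug: when the binary's -timeout (2h0m0s here) elapses, testing.(*M).startAlarm fires a timer that panics with "test timed out" and dumps every goroutine's stack, producing everything below. A miniature reproduction (hypothetical file; run with go test -timeout 2s):

// timeout_demo_test.go
package demo

import (
	"testing"
	"time"
)

func TestHangsForever(t *testing.T) {
	// Sleeps past the -timeout deadline; the alarm goroutine then panics
	// and prints a dump like the one that follows in this report.
	time.Sleep(10 * time.Second)
}
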
goroutine 4335 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 39 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0005a6d00, 0xc000993bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000810810, {0x8c74fc0, 0x2a, 0x2a}, {0x47c6aa5?, 0x62fce19?, 0x8c97d80?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000a72820)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000a72820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00080fb80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 171 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc000112750, 0xc0009ecf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0xd8?, 0xc000112750, 0xc000112798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc0000747d8?, 0x6113251?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000074780?, 0xc00078faa0?, 0xc000074780?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 160
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2753 [chan receive, 21 minutes]:
testing.(*T).Run(0xc0020ee680, {0x62a4f19?, 0x0?}, 0xc002332a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0020ee680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0020ee680, 0xc002736140)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2734
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 3341 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000210060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3340
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 170 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000a7e710, 0x2d)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002000ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a7e740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0020bbbd0, {0x78ef760, 0xc0020b7d70}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0020bbbd0, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 160
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 87 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 86
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 1352 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc002bb34a0, 0xc002c652c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 876
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 3346 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3345
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3961 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3960
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2954 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2953
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2302 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc00231bf50, 0xc00215ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0xe0?, 0xc00231bf50, 0xc00231bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc0029481a0?, 0x483a900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00231bfd0?, 0x4880c04?, 0xc002b8a1e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2286
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 3916 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0022a0000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3912
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3665 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0023506d0, 0x16)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002c9d980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002350700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009bad50, {0x78ef760, 0xc00218ced0}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009bad50, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3660
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3547 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc002b0a450, 0x16)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002c47080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002b0a480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002c5c2a0, {0x78ef760, 0xc0027f43f0}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002c5c2a0, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3561
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2313 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc000555310)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc002d50b60, 0xc002187830)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2212
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2961 [chan receive, 35 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008b4a40, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2943
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3301 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0029eaba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3300
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 160 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a7e740, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 164
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3548 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc000111f50, 0xc000111f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0x70?, 0xc000111f50, 0xc000111f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc002949ba0?, 0x483a900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002c5c020?, 0x4880c04?, 0xc000111fa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3561
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 159 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002000cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 164
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 172 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 171
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3313 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3296
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2212 [chan receive, 45 minutes]:
testing.(*T).Run(0xc002d501a0, {0x62a38e7?, 0x541f0de6a97?}, 0xc002187830)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002d501a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc002d501a0, 0x78e3558)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2734 [chan receive, 23 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0020ee000, 0x78e3700)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2332
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1261 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc002bb3340, 0xc002b8af60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1260
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 747 [IO wait, 111 minutes]:
internal/poll.runtime_pollWait(0x505356d8, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000a10500?, 0x3fe?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000a10500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000a10500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000616640)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000616640)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc000a5a0f0, {0x79060f0, 0xc000616640})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc000a5a0f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0008ad380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 744
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2944 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002899020)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2943
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2285 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002be0b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2275
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2953 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc00231bf50, 0xc00231bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0xe0?, 0xc00231bf50, 0xc00231bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc0029481a0?, 0x483a900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00231bfd0?, 0x4880c04?, 0xc002b8a1e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2961
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2736 [chan receive, 15 minutes]:
testing.(*T).Run(0xc0020ee340, {0x62a4f19?, 0x0?}, 0xc002333a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0020ee340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0020ee340, 0xc002736100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2734
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 995 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 994
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3069 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002998de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3082
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2286 [chan receive, 45 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002b86340, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2275
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3087 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc000096750, 0xc0020a5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0x0?, 0xc000096750, 0xc000096798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc002948820?, 0x483a900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000967d0?, 0x4880c04?, 0xc002736480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3070
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 3139 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002736a40, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3134
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3549 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3548
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2303 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2302
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 993 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0029f5610, 0x2b)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0022a0900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0029f5640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021c94b0, {0x78ef760, 0xc002070c60}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021c94b0, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 970
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 1391 [select, 107 minutes]:
net/http.(*persistConn).writeLoop(0xc002006000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1409
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

goroutine 3088 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3087
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3138 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002000f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3134
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3302 [chan receive, 31 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002350400, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3300
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3922 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3921
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2827 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2826
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4348 [IO wait]:
internal/poll.runtime_pollWait(0x505359c0, 0x77)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002332980?, 0x0?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitWrite(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:93
internal/poll.(*FD).WaitWrite(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:683
net.(*netFD).connect(0xc002332980, {0x7913080, 0xc0005503f0}, {0xc002199168?, 0x4706cfb?}, {0x78ee538?, 0xc002922580?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:141 +0x70b
net.(*netFD).dial(0xc002332980, {0x7913080, 0xc0005503f0}, {0x7918510?, 0x0?}, {0x7918510, 0xc0027c5590}, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/sock_posix.go:124 +0x3bc
net.socket({0x7913080, 0xc0005503f0}, {0x62a203f, 0x3}, 0x2, 0x1, 0xc000010018?, 0x0, {0x7918510, 0x0}, ...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/sock_posix.go:70 +0x29b
net.internetSocket({0x7913080, 0xc0005503f0}, {0x62a203f, 0x3}, {0x7918510, 0x0}, {0x7918510, 0xc0027c5590}, 0x1, 0x0, ...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/ipsock_posix.go:154 +0xf8
net.(*sysDialer).doDialTCPProto(0xc0021601e0, {0x7913080, 0xc0005503f0}, 0x0, 0xc0027c5590, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:85 +0xec
net.(*sysDialer).doDialTCP(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:75
net.(*sysDialer).dialTCP(0xc0027c5590?, {0x7913080?, 0xc0005503f0?}, 0x476d93a?, 0xc002199400?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:71 +0x65
net.(*sysDialer).dialSingle(0xc0021601e0, {0x7913080, 0xc0005503f0}, {0x78fbbf8, 0xc0027c5590})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/dial.go:651 +0x27d
net.(*sysDialer).dialSerial(0xc0021601e0, {0x7913080, 0xc0005503f0}, {0xc00279d2a0?, 0x1, 0xc00279d2a0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/dial.go:616 +0x24e
net.(*sysDialer).dialParallel(0xc00279d290?, {0x7913080?, 0xc0005503f0?}, {0xc00279d2a0?, 0xc0027c5530?, 0x62a2d26?}, {0x0?, 0x62a203f?, 0x10?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/dial.go:517 +0x3b4
net.(*Dialer).DialContext(0xc0004a0af0, {0x7912fd8, 0xc0027c53b0}, {0x62a203f, 0x3}, {0xc002b46180, 0x11})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/dial.go:508 +0x69a
k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0021c7570, {0x7912fd8?, 0xc0027c53b0?}, {0x62a203f?, 0x60?}, {0xc002b46180?, 0x48282f1?})
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/connrotation/connrotation.go:118 +0x43
net/http.(*Transport).dial(0xc0009f1a20?, {0x7912fd8?, 0xc0027c53b0?}, {0x62a203f?, 0xfffffffe009f1a28?}, {0xc002b46180?, 0x4818c7d?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1187 +0xd2
net/http.(*Transport).dialConn(0xc000540500, {0x7912fd8, 0xc0027c53b0}, {{}, 0x0, {0xc000066a00, 0x5}, {0xc002b46180, 0x11}, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1648 +0x7e8
net/http.(*Transport).dialConnFor(0xc000540500, 0xc0002054a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1485 +0xcd
created by net/http.(*Transport).queueForDial in goroutine 4298
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1449 +0x3c9
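
Goroutine 4348 is an in-flight TCP dial: client-go's connrotation dialer, sitting under http.Transport, is parked in pollDesc.waitWrite until the handshake completes. A minimal sketch of bounding such a dial with a context deadline so it cannot park a goroutine indefinitely (the address is a placeholder, not taken from this run):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The dial blocks exactly where the trace shows (waitWrite)
        // until the peer answers or the context expires.
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        var d net.Dialer
        conn, err := d.DialContext(ctx, "tcp", "192.0.2.1:8443") // placeholder address
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
    }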

goroutine 3113 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc002736a10, 0x17)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002000b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002736a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000535190, {0x78ef760, 0xc0020d3c80}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000535190, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3139
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef
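
This stack, and the many like it below, is client-go's certificate-rotation worker: a goroutine parked in workqueue.(*Type).Get, re-driven once per second by wait.Until/BackoffUntil. A minimal sketch of the same loop shape, with illustrative names:

    package main

    import (
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/util/workqueue"
    )

    func main() {
        queue := workqueue.New() // Get blocks in sync.Cond.Wait, as in the trace
        stopCh := make(chan struct{})

        worker := func() {
            for {
                item, shutdown := queue.Get()
                if shutdown {
                    return
                }
                // ... process item ...
                queue.Done(item)
            }
        }

        // wait.Until restarts the worker every second until stopCh closes,
        // matching the BackoffUntil/JitterUntil frames above.
        go wait.Until(worker, time.Second, stopCh)

        queue.Add("example")
        time.Sleep(100 * time.Millisecond)
        queue.ShutDown()
        close(stopCh)
    }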

goroutine 3888 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002737390, 0x15)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0029ebe60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0027373c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00279c1a0, {0x78ef760, 0xc0027c4b70}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00279c1a0, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3917
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 1390 [select, 107 minutes]:
net/http.(*persistConn).readLoop(0xc002006000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1409
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f
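
A persistConn.readLoop parked in select for 107 minutes is an idle HTTP/1 keep-alive connection that the Transport never reclaimed. A minimal sketch of the knobs that bound such goroutines (the timeout value is illustrative):

    package main

    import (
        "net/http"
        "time"
    )

    func main() {
        // readLoop goroutines live as long as the Transport keeps the
        // idle connection; IdleConnTimeout (or an explicit
        // CloseIdleConnections) reclaims them.
        t := &http.Transport{IdleConnTimeout: 90 * time.Second}
        client := &http.Client{Transport: t}
        defer t.CloseIdleConnections()
        _ = client
    }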

goroutine 3114 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc00205ff50, 0xc00205ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0xc0?, 0xc00205ff50, 0xc00205ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc002d509c0?, 0x483a900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00205ffd0?, 0x4880c04?, 0xc002c65bc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3139
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a
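
The companion poller for the cert-rotation worker. PollImmediateUntilWithContext is marked deprecated in the apimachinery v0.30.0 used here, in favor of the context-based helpers; a minimal equivalent sketch:

    package main

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()

        // Run the condition immediately, then once per second, until it
        // returns true or the context is done: the shape of the frames above.
        err := wait.PollUntilContextCancel(ctx, time.Second, true,
            func(ctx context.Context) (bool, error) {
                fmt.Println("polling...")
                return false, nil // keep polling until ctx expires
            })
        fmt.Println("poll ended:", err)
    }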

goroutine 3342 [chan receive, 31 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008b4d80, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3340
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3086 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000a7e910, 0x17)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002998cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a7e940)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021c8e30, {0x78ef760, 0xc00234c540}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021c8e30, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3070
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2332 [chan receive, 37 minutes]:
testing.(*T).Run(0xc00206e9c0, {0x62a38e7?, 0x4839fd3?}, 0x78e3700)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00206e9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00206e9c0, 0x78e35a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390
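
TestStartStop's parent goroutine has been parked in testing.(*T).Run for 37 minutes: t.Run blocks on a channel receive until the subtest tree, including parallel children, finishes. A minimal sketch of that layout:

    package example

    import "testing"

    // The parent stays blocked in t.Run (a chan receive, as in the
    // trace) until every parallel subtest returns.
    func TestGroup(t *testing.T) {
        t.Run("group", func(t *testing.T) {
            for _, name := range []string{"a", "b"} {
                name := name
                t.Run(name, func(t *testing.T) {
                    t.Parallel()
                    _ = name // ... test body ...
                })
            }
        })
    }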

goroutine 1330 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc002347ce0, 0xc002b8aae0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1316
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 1086 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc00275b1e0, 0xc00271cc00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1085
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973
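
Goroutines 1330 and 1086 are os/exec context watchers stuck on a channel send for roughly two hours; watchCtx delivers its result to Cmd.Wait, so a long-blocked send typically means Wait was never called, or never returned, for a command started with a context. A minimal sketch of the safe pairing:

    package main

    import (
        "context"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        cmd := exec.CommandContext(ctx, "sleep", "1") // illustrative command
        if err := cmd.Start(); err != nil {
            return
        }
        // Without this Wait, the internal watchCtx goroutine can stay
        // parked on its result-channel send, as in the traces above.
        _ = cmd.Wait()
    }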

goroutine 970 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0029f5640, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 889
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 969 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0022a0a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 889
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205
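
The waitingLoop select belongs to client-go's delaying workqueue, the retry layer under the cert-rotation queues above. A minimal sketch of what it does (the item name is illustrative):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/workqueue"
    )

    func main() {
        q := workqueue.NewDelayingQueue()
        defer q.ShutDown()

        // waitingLoop (the select in the trace) holds the item until its
        // delay elapses, then moves it onto the underlying queue.
        q.AddAfter("retry-me", 50*time.Millisecond)

        item, _ := q.Get()
        fmt.Println("got:", item)
        q.Done(item)
    }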

goroutine 2301 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002b86310, 0x1b)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002be0a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002b86340)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002d02000, {0x78ef760, 0xc002c72030}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002d02000, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2286
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 994 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc002061f50, 0xc001ff5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0xd0?, 0xc002061f50, 0xc002061f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc0020eeb60?, 0x483a900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002061fd0?, 0x4880c04?, 0xc0008b5540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 970
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 4298 [select]:
net/http.(*Transport).getConn(0xc000540500, 0xc002736300, {{}, 0x0, {0xc000066a00, 0x5}, {0xc002b46180, 0x11}, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1406 +0x5a5
net/http.(*Transport).roundTrip(0xc000540500, 0xc002156c60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:595 +0x73a
net/http.(*Transport).RoundTrip(0x76c95e0?, 0xc0027c54a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/roundtrip.go:17 +0x13
k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0005933c0, 0xc002156b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/round_trippers.go:168 +0x326
net/http.send(0xc002156b40, {0x78f3880, 0xc0005933c0}, {0x4706c01?, 0x2c?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/client.go:259 +0x5e4
net/http.(*Client).send(0xc0022c7380, 0xc002156b40, {0x0?, 0xc002156b40?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/client.go:180 +0x98
net/http.(*Client).do(0xc0022c7380, 0xc002156b40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/client.go:724 +0x8dc
net/http.(*Client).Do(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/client.go:590
k8s.io/client-go/rest.(*Request).request(0xc002156a20, {0x7913080, 0xc000494c40}, 0xc00006ee20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/rest/request.go:1023 +0x397
k8s.io/client-go/rest.(*Request).Do(0xc002156a20, {0x7913080, 0xc000494c40})
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/rest/request.go:1063 +0xc5
k8s.io/client-go/kubernetes/typed/core/v1.(*pods).List(0xc0003f8f20, {0x7913080, 0xc000494c40}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x62e062a, 0x1c}, {0x0, ...}, ...})
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/kubernetes/typed/core/v1/pod.go:99 +0x165
k8s.io/minikube/test/integration.PodWait.func1({0x7913080, 0xc000494c40})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:327 +0x10b
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2(0xc00006f9d0?, {0x7913080?, 0xc000494c40?})
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:87 +0x52
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x7913080, 0xc000494c40}, {0x79067b0, 0xc0024e03e0}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:88 +0x24d
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x7913080?, 0xc00081af50?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x7913080, 0xc00081af50}, 0xc002d51d40, {0xc0029c9960, 0x1c}, {0x62c9005, 0x14}, {0x62e062a, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x7913080, 0xc00081af50}, 0xc002d51d40, {0xc0029c9960, 0x1c}, {0x62cbe48?, 0xc00205ef60?}, {0x4839fd3?, 0x4791f2f?}, {0xc0001b3400, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002d51d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002d51d40, 0xc002332200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 4109
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390
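
This goroutine is doing the live work of the blocked subtest: minikube's PodWait helper listing pods through client-go under wait.PollUntilContextTimeout. A minimal sketch of the same polling shape (function and parameter names are illustrative, not minikube's):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPods polls the API server once per second until every pod
    // matching the selector is Running, or the timeout expires.
    func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return len(pods.Items) > 0, nil
            })
    }

    func main() {} // client construction omitted; requires a kubeconfig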

goroutine 3115 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3114
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3070 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a7e940, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3082
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2825 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002736b10, 0x18)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002aa6ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002736b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00279cb70, {0x78ef760, 0xc002272d80}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00279cb70, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2812
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2826 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc00205af50, 0xc00215bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0x0?, 0xc00205af50, 0xc00205af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc0029481a0?, 0x483a900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00205afd0?, 0x4880c04?, 0xc002c64600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2812
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2952 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0008b4a10, 0x17)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002898de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008b4a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021c6170, {0x78ef760, 0xc00208a120}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021c6170, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2961
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 4333 [IO wait]:
internal/poll.runtime_pollWait(0x50535010, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002c46240?, 0xc002780a00?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002c46240, {0xc002780a00, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0028ba348, {0xc002780a00?, 0x502646c8?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0022c6c60, {0x78ee178, 0xc002a74550})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x78ee2b8, 0xc0022c6c60}, {0x78ee178, 0xc002a74550}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xffffffffffffffff?, {0x78ee2b8, 0xc0022c6c60})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xe7791f701?, {0x78ee2b8?, 0xc0022c6c60?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x78ee2b8, 0xc0022c6c60}, {0x78ee238, 0xc0028ba348}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x8cfba60?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4331
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab
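
Goroutines 4333 and 4332 are the stdout/stderr copiers that os/exec starts when a command's output goes to a bytes.Buffer: one io.Copy per stream, parked in IO wait until the child writes or exits. A minimal sketch:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // Assigning bytes.Buffers to Stdout/Stderr makes Start spawn the
        // writerDescriptor/io.Copy goroutines seen in the trace.
        var out, errBuf bytes.Buffer
        cmd := exec.Command("echo", "hello") // illustrative command
        cmd.Stdout = &out
        cmd.Stderr = &errBuf
        if err := cmd.Run(); err != nil {
            fmt.Println("run failed:", err)
            return
        }
        fmt.Print(out.String())
    }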

goroutine 2812 [chan receive, 37 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002736b40, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2821
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3345 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc00231ff50, 0xc00231ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0x0?, 0xc00231ff50, 0xc00231ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc00231ffb0?, 0x4c8d858?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4c8d81b?, 0xc00212f980?, 0xc002003080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3342
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2811 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002aa6c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2821
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3978 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002c46f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3977
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3917 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0027373c0, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3912
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3666 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc00231b750, 0xc00231b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0x80?, 0xc00231b750, 0xc00231b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0x10000c002948d00?, 0x483a900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4880ba5?, 0xc00275af20?, 0xc002b8bf80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3660
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 3769 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc002168750, 0xc00293df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0xc0?, 0xc002168750, 0xc002168798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc0020ef1e0?, 0x483a900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0021687d0?, 0x4880c04?, 0xc0025fd8c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3783
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 3296 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc000110f50, 0xc000110f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0x10?, 0xc000110f50, 0xc000110f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc000110fb0?, 0x4c8d858?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4c8d81b?, 0xc002570600?, 0xc002120bd0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3302
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 3960 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc002169f50, 0xc00215cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0x0?, 0xc002169f50, 0xc002169f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc0020ef801?, 0xc0000582a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002169fd0?, 0x4880c04?, 0xc0000582a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3979
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 3312 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008b4c50, 0x17)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0022a1e60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008b4d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00279de80, {0x78ef760, 0xc0020d2ab0}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00279de80, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3342
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3295 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0023503d0, 0x17)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0029eaa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002350400)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000990340, {0x78ef760, 0xc002b14210}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000990340, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3302
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3667 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3666
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3783 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008b5500, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3778
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3782 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002c19140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3778
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3659 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002c9daa0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3655
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 4163 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002b0a540, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3770 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3769
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3660 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002350700, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3655
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3560 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002c471a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3543
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3561 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002b0a480, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3543
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3921 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc00209bf50, 0xc002158f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0xe0?, 0xc00209bf50, 0xc00209bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc00216a7b0?, 0x4c8d858?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00209bfd0?, 0x4880c04?, 0xc002736800?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3917
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 3768 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0008b54d0, 0x16)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002c19020)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008b5500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00279c260, {0x78ef760, 0xc0027c4210}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00279c260, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3783
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 4275 [chan receive]:
testing.(*T).Run(0xc002d509c0, {0x62cec9e?, 0x60400000004?}, 0xc00080e800)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc002d509c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc002d509c0, 0xc002333a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2736
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 3959 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002b0a4d0, 0x15)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002c46de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002b0a500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00279c080, {0x78ef760, 0xc002a1a090}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00279c080, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3979
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3979 [chan receive, 23 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002b0a500, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3977
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 4158 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc002b0a390, 0x3)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x73dd3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002c18720)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002b0a540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009ba520, {0x78ef760, 0xc000974450}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009ba520, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4163
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 4334 [select]:
os/exec.(*Cmd).watchCtx(0xc0023471e0, 0xc0021e2ba0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 4331
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 4331 [syscall]:
syscall.syscall6(0xc0022c7f80?, 0x1000000000010?, 0x10000000019?, 0x505103f8?, 0x90?, 0x95b15b8?, 0x90?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc002197a50?, 0x47070a5?, 0x90?, 0x7850140?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x4837c45?, 0xc002197a84, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002908c00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0023471e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0023471e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002948d00, 0xc0023471e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateEnableAddonWhileActive({0x7913080, 0xc000530000}, 0xc002948d00, {0xc002773cc8, 0x11}, {0x62ae837, 0xa}, {0x4839fd3?, 0x4791f2f?}, {0xc0020b1300, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:205 +0x1d5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002948d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002948d00, 0xc00080e800)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 4275
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390
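
The one goroutine blocked in the kernel: validateEnableAddonWhileActive is inside the test helper Run, which is inside Cmd.Run -> Wait -> wait4, i.e., a still-executing minikube subprocess. A minimal sketch of that helper shape (names are illustrative, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // run starts the command and blocks in Wait (the syscall.wait4
    // frame in the trace) until the child exits, reporting how long
    // the child took.
    func run(cmd *exec.Cmd) error {
        start := time.Now()
        err := cmd.Run()
        fmt.Printf("%s took %s\n", cmd.Path, time.Since(start).Round(time.Millisecond))
        return err
    }

    func main() {
        _ = run(exec.Command("true"))
    }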

goroutine 4160 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4159
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4349 [select]:
net.(*netFD).connect.func2()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:118 +0x7a
created by net.(*netFD).connect in goroutine 4348
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:117 +0x37c

goroutine 4332 [IO wait]:
internal/poll.runtime_pollWait(0x50534d28, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002c46180?, 0xc002780800?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002c46180, {0xc002780800, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0028ba330, {0xc002780800?, 0xc00209be78?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0022c6c30, {0x78ee178, 0xc002a74548})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x78ee2b8, 0xc0022c6c30}, {0x78ee178, 0xc002a74548}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x8ba9860?, {0x78ee2b8, 0xc0022c6c30})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x62a4f1f?, {0x78ee2b8?, 0xc0022c6c30?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x78ee2b8, 0xc0022c6c30}, {0x78ee238, 0xc0028ba330}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00080e800?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4331
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 4162 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002c18840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 4154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 4159 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7913240, 0xc0000582a0}, 0xc00209af50, 0xc00215df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x7913240, 0xc0000582a0}, 0x11?, 0xc00209af50, 0xc00209af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x7913240?, 0xc0000582a0?}, 0xc00209afb0?, 0x4c8d858?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00209afd0?, 0x4880c04?, 0xc00080f600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4163
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 4109 [chan receive, 9 minutes]:
testing.(*T).Run(0xc0020efa00, {0x62cecb4?, 0x60400000004?}, 0xc002332200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0020efa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0020efa00, 0xc002332a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2753
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390
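Goroutine 4109 shows why a parent test can sit in "chan receive" for nine minutes: testing.(*T).Run starts the subtest in its own goroutine and blocks the parent on a channel until the subtest finishes. A minimal sketch (hypothetical test, not from start_stop_delete_test.go):

// Illustrative sketch: t.Run parks the parent exactly like goroutine 4109.
package example

import (
	"testing"
	"time"
)

func TestParent(t *testing.T) {
	t.Run("child", func(t *testing.T) {
		time.Sleep(10 * time.Millisecond) // stand-in for a long minikube operation
	})
	// The parent resumes only after the child goroutine signals completion.
}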

                                                
                                    

Test pass (200/227)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 25.77
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.45
9 TestDownloadOnly/v1.20.0/DeleteAll 0.39
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.30.0/json-events 13.11
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.32
18 TestDownloadOnly/v1.30.0/DeleteAll 0.39
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.37
21 TestBinaryMirror 1.01
22 TestOffline 62.25
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.22
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.2
27 TestAddons/Setup 141.82
29 TestAddons/parallel/Registry 15.39
30 TestAddons/parallel/Ingress 19.36
31 TestAddons/parallel/InspektorGadget 10.6
32 TestAddons/parallel/MetricsServer 5.52
33 TestAddons/parallel/HelmTiller 10
35 TestAddons/parallel/CSI 82.98
36 TestAddons/parallel/Headlamp 11.92
37 TestAddons/parallel/CloudSpanner 5.36
38 TestAddons/parallel/LocalPath 53.44
39 TestAddons/parallel/NvidiaDevicePlugin 5.33
40 TestAddons/parallel/Yakd 5
43 TestAddons/serial/GCPAuth/Namespaces 0.1
44 TestAddons/StoppedEnableDisable 5.97
45 TestCertOptions 42.41
46 TestCertExpiration 252.84
47 TestDockerFlags 50.43
48 TestForceSystemdFlag 41.7
49 TestForceSystemdEnv 43.24
52 TestHyperKitDriverInstallOrUpdate 8.2
55 TestErrorSpam/setup 35.49
56 TestErrorSpam/start 1.69
57 TestErrorSpam/status 0.51
58 TestErrorSpam/pause 1.34
59 TestErrorSpam/unpause 1.37
60 TestErrorSpam/stop 155.86
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 82.01
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 40.47
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.95
72 TestFunctional/serial/CacheCmd/cache/add_local 1.53
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
74 TestFunctional/serial/CacheCmd/cache/list 0.09
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.06
77 TestFunctional/serial/CacheCmd/cache/delete 0.17
78 TestFunctional/serial/MinikubeKubectlCmd 0.95
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.36
80 TestFunctional/serial/ExtraConfig 41.84
81 TestFunctional/serial/ComponentHealth 0.05
82 TestFunctional/serial/LogsCmd 2.79
83 TestFunctional/serial/LogsFileCmd 2.84
84 TestFunctional/serial/InvalidService 4.63
86 TestFunctional/parallel/ConfigCmd 0.54
87 TestFunctional/parallel/DashboardCmd 13.96
88 TestFunctional/parallel/DryRun 0.98
89 TestFunctional/parallel/InternationalLanguage 0.59
90 TestFunctional/parallel/StatusCmd 0.51
94 TestFunctional/parallel/ServiceCmdConnect 7.57
95 TestFunctional/parallel/AddonsCmd 0.27
96 TestFunctional/parallel/PersistentVolumeClaim 28.14
98 TestFunctional/parallel/SSHCmd 0.3
99 TestFunctional/parallel/CpCmd 1.13
100 TestFunctional/parallel/MySQL 25.76
101 TestFunctional/parallel/FileSync 0.25
102 TestFunctional/parallel/CertSync 1.36
106 TestFunctional/parallel/NodeLabels 0.05
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.17
110 TestFunctional/parallel/License 0.54
111 TestFunctional/parallel/Version/short 0.11
112 TestFunctional/parallel/Version/components 0.52
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.16
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.15
117 TestFunctional/parallel/ImageCommands/ImageBuild 1.88
118 TestFunctional/parallel/ImageCommands/Setup 2.32
119 TestFunctional/parallel/DockerEnv/bash 0.78
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.27
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.74
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.29
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.27
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.25
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.42
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.37
130 TestFunctional/parallel/ServiceCmd/DeployApp 12.13
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.39
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.14
136 TestFunctional/parallel/ServiceCmd/List 0.36
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
139 TestFunctional/parallel/ServiceCmd/Format 0.27
140 TestFunctional/parallel/ServiceCmd/URL 0.28
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
148 TestFunctional/parallel/ProfileCmd/profile_list 0.3
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
150 TestFunctional/parallel/MountCmd/any-port 6.05
151 TestFunctional/parallel/MountCmd/specific-port 1.52
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.67
153 TestFunctional/delete_addon-resizer_images 0.12
154 TestFunctional/delete_my-image_image 0.05
155 TestFunctional/delete_minikube_cached_images 0.05
159 TestMultiControlPlane/serial/StartCluster 206.89
160 TestMultiControlPlane/serial/DeployApp 5.09
161 TestMultiControlPlane/serial/PingHostFromPods 1.47
162 TestMultiControlPlane/serial/AddWorkerNode 157.64
163 TestMultiControlPlane/serial/NodeLabels 0.05
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.39
165 TestMultiControlPlane/serial/CopyFile 9.65
166 TestMultiControlPlane/serial/StopSecondaryNode 8.71
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.3
168 TestMultiControlPlane/serial/RestartSecondaryNode 162.81
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.37
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 324.37
171 TestMultiControlPlane/serial/DeleteSecondaryNode 8.22
173 TestMultiControlPlane/serial/StopCluster 249.54
174 TestMultiControlPlane/serial/RestartCluster 118.69
182 TestJSONOutput/start/Command 51.15
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.48
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.46
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 8.34
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.78
210 TestMainNoArgs 0.09
211 TestMinikubeProfile 206.01
214 TestMountStart/serial/StartWithMountFirst 18.19
215 TestMountStart/serial/VerifyMountFirst 0.31
216 TestMountStart/serial/StartWithMountSecond 18.52
217 TestMountStart/serial/VerifyMountSecond 0.31
218 TestMountStart/serial/DeleteFirst 2.38
219 TestMountStart/serial/VerifyMountPostDelete 0.31
220 TestMountStart/serial/Stop 2.4
221 TestMountStart/serial/RestartStopped 18.75
222 TestMountStart/serial/VerifyMountPostStop 0.31
225 TestMultiNode/serial/FreshStart2Nodes 100.36
226 TestMultiNode/serial/DeployApp2Nodes 4.36
227 TestMultiNode/serial/PingHostFrom2Pods 0.93
228 TestMultiNode/serial/AddNode 37.07
229 TestMultiNode/serial/MultiNodeLabels 0.05
230 TestMultiNode/serial/ProfileList 0.21
231 TestMultiNode/serial/CopyFile 5.44
232 TestMultiNode/serial/StopNode 2.88
238 TestMultiNode/serial/ValidateNameConflict 49.11
242 TestPreload 160.61
244 TestScheduledStopUnix 109.65
245 TestSkaffold 233.35
248 TestRunningBinaryUpgrade 84.43
250 TestKubernetesUpgrade 116.68
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.06
264 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.73
265 TestStoppedBinaryUpgrade/Setup 0.99
266 TestStoppedBinaryUpgrade/Upgrade 87.23
268 TestPause/serial/Start 63.02
269 TestPause/serial/SecondStartNoReconfiguration 37.44
270 TestStoppedBinaryUpgrade/MinikubeLogs 3.45
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.63
280 TestNoKubernetes/serial/StartWithK8s 40.43
281 TestPause/serial/Pause 0.54
282 TestPause/serial/VerifyStatus 0.17
283 TestPause/serial/Unpause 0.55
284 TestPause/serial/PauseAgain 0.6
285 TestPause/serial/DeletePaused 5.28
286 TestPause/serial/VerifyDeletedResources 0.21
288 TestNoKubernetes/serial/StartWithStopK8s 17.38
289 TestNoKubernetes/serial/Start 20.95
292 TestNoKubernetes/serial/VerifyK8sNotRunning 0.14
293 TestNoKubernetes/serial/ProfileList 0.55
294 TestNoKubernetes/serial/Stop 2.39
298 TestNoKubernetes/serial/StartNoArgs 19.24
300 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
TestDownloadOnly/v1.20.0/json-events (25.77s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-861000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-861000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (25.766223132s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.77s)
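With -o=json, minikube start emits its progress as one JSON event per line, which is what this test consumes. A minimal sketch of reading such a stream (the event shape and field name are assumptions, not taken from the test):

// Illustrative sketch: decode line-delimited JSON events from minikube's stdout.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-861000", "--driver=hyperkit")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() { // one JSON object per line
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise
		}
		fmt.Println("event type:", ev["type"]) // "type" is an assumed field name
	}
	_ = cmd.Wait()
}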

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.45s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-861000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-861000: exit status 85 (445.202251ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.33.0 | 25 Apr 24 11:30 PDT |          |
	|         | -p download-only-861000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 11:30:29
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 11:30:29.194083    1887 out.go:291] Setting OutFile to fd 1 ...
	I0425 11:30:29.194278    1887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:30:29.194283    1887 out.go:304] Setting ErrFile to fd 2...
	I0425 11:30:29.194287    1887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:30:29.194454    1887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	W0425 11:30:29.194555    1887 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18757-1425/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18757-1425/.minikube/config/config.json: no such file or directory
	I0425 11:30:29.196364    1887 out.go:298] Setting JSON to true
	I0425 11:30:29.220673    1887 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1799,"bootTime":1714068030,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0425 11:30:29.220770    1887 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 11:30:29.242727    1887 out.go:97] [download-only-861000] minikube v1.33.0 on Darwin 14.4.1
	I0425 11:30:29.264509    1887 out.go:169] MINIKUBE_LOCATION=18757
	I0425 11:30:29.242896    1887 notify.go:220] Checking for updates...
	W0425 11:30:29.242930    1887 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball: no such file or directory
	I0425 11:30:29.307384    1887 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 11:30:29.328494    1887 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 11:30:29.349367    1887 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 11:30:29.370602    1887 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	W0425 11:30:29.412444    1887 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0425 11:30:29.413083    1887 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 11:30:29.490528    1887 out.go:97] Using the hyperkit driver based on user configuration
	I0425 11:30:29.490569    1887 start.go:297] selected driver: hyperkit
	I0425 11:30:29.490590    1887 start.go:901] validating driver "hyperkit" against <nil>
	I0425 11:30:29.490795    1887 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 11:30:29.491129    1887 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18757-1425/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0425 11:30:29.719346    1887 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0425 11:30:29.724042    1887 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:30:29.724066    1887 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0425 11:30:29.724091    1887 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 11:30:29.728276    1887 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0425 11:30:29.728426    1887 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0425 11:30:29.728488    1887 cni.go:84] Creating CNI manager for ""
	I0425 11:30:29.728505    1887 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0425 11:30:29.728594    1887 start.go:340] cluster config:
	{Name:download-only-861000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-861000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 11:30:29.728830    1887 iso.go:125] acquiring lock: {Name:mk776ce15f524979e50f0732af6183703dc958eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 11:30:29.750266    1887 out.go:97] Downloading VM boot image ...
	I0425 11:30:29.750383    1887 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 11:30:40.294903    1887 out.go:97] Starting "download-only-861000" primary control-plane node in "download-only-861000" cluster
	I0425 11:30:40.294943    1887 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0425 11:30:40.355973    1887 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0425 11:30:40.356015    1887 cache.go:56] Caching tarball of preloaded images
	I0425 11:30:40.356316    1887 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0425 11:30:40.383740    1887 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0425 11:30:40.383761    1887 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0425 11:30:40.479123    1887 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0425 11:30:48.816322    1887 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0425 11:30:48.816558    1887 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0425 11:30:49.363966    1887 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0425 11:30:49.364216    1887 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/download-only-861000/config.json ...
	I0425 11:30:49.364241    1887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/download-only-861000/config.json: {Name:mk9ba62f7d3cacae120ac8146ad86bb0d57faf5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 11:30:49.364542    1887 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0425 11:30:49.364850    1887 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-861000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-861000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.45s)
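Each download.go line above carries a "?checksum=..." query (an md5 literal for the preload tarball, a remote .sha256 file for the ISO and kubectl), so every artifact is verified after it lands. A standard-library sketch of that verify-after-download idea (illustrative; not minikube's download.go, and the URL/checksum arguments are placeholders):

// Illustrative sketch: stream a download to disk while hashing, then compare.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadAndVerify(url, dest, wantHex string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	// Tee the body through the hash while writing it to disk.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		os.Remove(dest)
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Placeholder arguments; the real URLs and checksums are in the log above.
	if err := downloadAndVerify("https://example.com/a.iso", "a.iso", "0000"); err != nil {
		fmt.Println(err)
	}
}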

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.39s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.39s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-861000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (13.11s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-321000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-321000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperkit : (13.107789901s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (13.11s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-321000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-321000: exit status 85 (318.407453ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.33.0 | 25 Apr 24 11:30 PDT |                     |
	|         | -p download-only-861000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 25 Apr 24 11:30 PDT | 25 Apr 24 11:30 PDT |
	| delete  | -p download-only-861000        | download-only-861000 | jenkins | v1.33.0 | 25 Apr 24 11:30 PDT | 25 Apr 24 11:30 PDT |
	| start   | -o=json --download-only        | download-only-321000 | jenkins | v1.33.0 | 25 Apr 24 11:30 PDT |                     |
	|         | -p download-only-321000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 11:30:56
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 11:30:56.167588    1932 out.go:291] Setting OutFile to fd 1 ...
	I0425 11:30:56.167844    1932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:30:56.167849    1932 out.go:304] Setting ErrFile to fd 2...
	I0425 11:30:56.167853    1932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:30:56.168031    1932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 11:30:56.169503    1932 out.go:298] Setting JSON to true
	I0425 11:30:56.191469    1932 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1826,"bootTime":1714068030,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0425 11:30:56.191549    1932 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 11:30:56.212954    1932 out.go:97] [download-only-321000] minikube v1.33.0 on Darwin 14.4.1
	I0425 11:30:56.234443    1932 out.go:169] MINIKUBE_LOCATION=18757
	I0425 11:30:56.213189    1932 notify.go:220] Checking for updates...
	I0425 11:30:56.276728    1932 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 11:30:56.297740    1932 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 11:30:56.318698    1932 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 11:30:56.339406    1932 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	W0425 11:30:56.381608    1932 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0425 11:30:56.382128    1932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 11:30:56.411527    1932 out.go:97] Using the hyperkit driver based on user configuration
	I0425 11:30:56.411599    1932 start.go:297] selected driver: hyperkit
	I0425 11:30:56.411618    1932 start.go:901] validating driver "hyperkit" against <nil>
	I0425 11:30:56.411827    1932 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 11:30:56.412057    1932 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18757-1425/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0425 11:30:56.421733    1932 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0425 11:30:56.425482    1932 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:30:56.425502    1932 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0425 11:30:56.425524    1932 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 11:30:56.428152    1932 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0425 11:30:56.428310    1932 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0425 11:30:56.428363    1932 cni.go:84] Creating CNI manager for ""
	I0425 11:30:56.428379    1932 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0425 11:30:56.428387    1932 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 11:30:56.428452    1932 start.go:340] cluster config:
	{Name:download-only-321000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 11:30:56.428538    1932 iso.go:125] acquiring lock: {Name:mk776ce15f524979e50f0732af6183703dc958eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 11:30:56.449513    1932 out.go:97] Starting "download-only-321000" primary control-plane node in "download-only-321000" cluster
	I0425 11:30:56.449565    1932 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 11:30:56.503407    1932 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 11:30:56.503497    1932 cache.go:56] Caching tarball of preloaded images
	I0425 11:30:56.503958    1932 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 11:30:56.525881    1932 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0425 11:30:56.525927    1932 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0425 11:30:56.608630    1932 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0425 11:31:03.443252    1932 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0425 11:31:03.443449    1932 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0425 11:31:03.927751    1932 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0425 11:31:03.928116    1932 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/download-only-321000/config.json ...
	I0425 11:31:03.928140    1932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/download-only-321000/config.json: {Name:mkab6eb2f3416d905d581ec91d30d007099a1267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 11:31:03.928533    1932 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0425 11:31:03.928832    1932 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18757-1425/.minikube/cache/darwin/amd64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-321000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-321000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.32s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAll (0.39s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.39s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-321000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestBinaryMirror (1.01s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-658000 --alsologtostderr --binary-mirror http://127.0.0.1:49342 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-658000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-658000
--- PASS: TestBinaryMirror (1.01s)

                                                
                                    
TestOffline (62.25s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-402000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-402000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (56.696234829s)
helpers_test.go:175: Cleaning up "offline-docker-402000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-402000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-402000: (5.557571265s)
--- PASS: TestOffline (62.25s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-504000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-504000: exit status 85 (217.519981ms)

                                                
                                                
-- stdout --
	* Profile "addons-504000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-504000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-504000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-504000: exit status 85 (196.940571ms)

                                                
                                                
-- stdout --
	* Profile "addons-504000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-504000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.20s)

                                                
                                    
TestAddons/Setup (141.82s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-504000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-504000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m21.819398408s)
--- PASS: TestAddons/Setup (141.82s)

                                                
                                    
TestAddons/parallel/Registry (15.39s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 10.144445ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-krx89" [72b34dee-8d56-4d3a-a3b9-bc2d3f6f8fbf] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003849285s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cvcxd" [d8cb2922-f452-4907-a885-7111c588b21d] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003947583s
addons_test.go:340: (dbg) Run:  kubectl --context addons-504000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-504000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-504000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.709200937s)
addons_test.go:359: (dbg) Run:  out/minikube-darwin-amd64 -p addons-504000 ip
2024/04/25 11:33:49 [DEBUG] GET http://192.169.0.3:5000
addons_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p addons-504000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.39s)

                                                
                                    
TestAddons/parallel/Ingress (19.36s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-504000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-504000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-504000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [60961394-6545-4f37-be71-a1d214b78810] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [60961394-6545-4f37-be71-a1d214b78810] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005733974s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-504000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-504000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-amd64 -p addons-504000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.169.0.3
addons_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 -p addons-504000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p addons-504000 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p addons-504000 addons disable ingress --alsologtostderr -v=1: (7.463752529s)
--- PASS: TestAddons/parallel/Ingress (19.36s)
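The curl above hits 127.0.0.1 but sends "Host: nginx.example.com", so the nginx ingress controller matches the request against its host rule rather than the IP. The same probe in Go sets Request.Host, since the net/http client ignores a Host value placed in Header (minimal illustrative sketch):

// Illustrative sketch: override the Host header to exercise host-based routing.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // matches the Ingress host rule
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}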

                                                
                                    
TestAddons/parallel/InspektorGadget (10.6s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-95crr" [2eb39356-d05b-4035-9035-536c2c8625d5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005155634s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-504000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-504000: (5.589318528s)
--- PASS: TestAddons/parallel/InspektorGadget (10.60s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.52s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 1.662696ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-cl9vt" [423dbf51-aebd-43ea-bb55-ab38580cc0e5] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004504397s
addons_test.go:415: (dbg) Run:  kubectl --context addons-504000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-504000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.52s)

                                                
                                    
TestAddons/parallel/HelmTiller (10s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 1.945928ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-8zn48" [1b3cc605-9033-4d85-8f20-c86dab5fb7f2] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004235121s
addons_test.go:473: (dbg) Run:  kubectl --context addons-504000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-504000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.571688142s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-504000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.00s)

                                                
                                    
TestAddons/parallel/CSI (82.98s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 11.457004ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-504000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504000 get pvc hpvc -o jsonpath={.status.phase} -n default
[the kubectl poll above ran 36 times in total while the test waited for pvc "hpvc"]
addons_test.go:574: (dbg) Run:  kubectl --context addons-504000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3f383109-1974-4f12-907f-0a08055b3ff0] Pending
helpers_test.go:344: "task-pv-pod" [3f383109-1974-4f12-907f-0a08055b3ff0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3f383109-1974-4f12-907f-0a08055b3ff0] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004804282s
addons_test.go:584: (dbg) Run:  kubectl --context addons-504000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-504000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-504000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-504000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-504000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-504000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
[the kubectl poll above ran 17 times in total while the test waited for pvc "hpvc-restore"]
addons_test.go:616: (dbg) Run:  kubectl --context addons-504000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [057c80ba-10f1-448f-8575-a36add9d96d8] Pending
helpers_test.go:344: "task-pv-pod-restore" [057c80ba-10f1-448f-8575-a36add9d96d8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [057c80ba-10f1-448f-8575-a36add9d96d8] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002247143s
addons_test.go:626: (dbg) Run:  kubectl --context addons-504000 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-504000 delete pod task-pv-pod-restore: (1.027345002s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-504000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-504000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-504000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-504000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.43390524s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-504000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (82.98s)
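Note: the long runs of identical helpers_test.go:394 lines above are single wait loops that shell out to kubectl until the PVC reports the awaited phase. A minimal Go sketch of that pattern, assuming a hypothetical waitForPVCPhase helper, a two-second interval, and a "Bound" target (the suite's real helper, interval, and target phase may differ):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase re-runs `kubectl get pvc` until the PVC reports the wanted
// phase or the deadline passes. Helper name and interval are illustrative.
func waitForPVCPhase(kubeContext, name, namespace, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed interval; the real helper's may differ
	}
	return fmt.Errorf("pvc %s/%s never reached phase %q", namespace, name, want)
}

func main() {
	if err := waitForPVCPhase("addons-504000", "hpvc", "default", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}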

                                                
                                    
TestAddons/parallel/Headlamp (11.92s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-504000 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-xg22h" [5b88cc66-5482-43d1-9925-4cf610147cec] Pending
helpers_test.go:344: "headlamp-7559bf459f-xg22h" [5b88cc66-5482-43d1-9925-4cf610147cec] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-xg22h" [5b88cc66-5482-43d1-9925-4cf610147cec] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005259195s
--- PASS: TestAddons/parallel/Headlamp (11.92s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.36s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-wgzdr" [af320230-0a41-4c4b-aca5-8876ce229664] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003889122s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-504000
--- PASS: TestAddons/parallel/CloudSpanner (5.36s)

                                                
                                    
TestAddons/parallel/LocalPath (53.44s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-504000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-504000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504000 get pvc test-pvc -o jsonpath={.status.phase} -n default
[the kubectl poll above ran 6 times in total while the test waited for pvc "test-pvc"]
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [28e2c77d-c6ca-4e7e-a1ea-b56384f82508] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [28e2c77d-c6ca-4e7e-a1ea-b56384f82508] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [28e2c77d-c6ca-4e7e-a1ea-b56384f82508] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003969925s
addons_test.go:891: (dbg) Run:  kubectl --context addons-504000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-504000 ssh "cat /opt/local-path-provisioner/pvc-4e986424-29b2-496c-ab90-2732c5fee0de_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-504000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-504000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-504000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-504000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.778081472s)
--- PASS: TestAddons/parallel/LocalPath (53.44s)
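Note: the decisive LocalPath check above is the `minikube ssh "cat /opt/local-path-provisioner/..."` step, which reads back the file the test pod wrote into the provisioned volume. A hedged sketch of that read-back, with the expected contents stubbed in as an assumption (the real test compares against the data it wrote; the PVC UID in the path comes from `kubectl get pvc -o=json` at runtime):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	path := "/opt/local-path-provisioner/pvc-4e986424-29b2-496c-ab90-2732c5fee0de_default_test-pvc/file1"
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "addons-504000",
		"ssh", "cat "+path).Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// Assumed expected payload; the actual test checks what its pod wrote.
	if got := strings.TrimSpace(string(out)); got != "local-path-provisioner" {
		fmt.Printf("unexpected file contents: %q\n", got)
	}
}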

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.33s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-p5mwk" [4ee2d978-58f8-44ed-91c9-174af095892a] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004672375s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-504000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.33s)

                                                
                                    
TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-bfm4q" [dc026c47-ca21-4f55-bf4d-4563d23c8f6e] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004159505s
--- PASS: TestAddons/parallel/Yakd (5.00s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-504000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-504000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
TestAddons/StoppedEnableDisable (5.97s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-504000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-504000: (5.3858705s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-504000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-504000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-504000
--- PASS: TestAddons/StoppedEnableDisable (5.97s)

                                                
                                    
TestCertOptions (42.41s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-434000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0425 12:47:26.175729    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-434000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (36.75175947s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-434000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-434000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-434000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-434000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-434000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-434000: (5.276456981s)
--- PASS: TestCertOptions (42.41s)
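Note: the `openssl x509 -text -noout` step above verifies that the extra --apiserver-ips and --apiserver-names values ended up as SANs in the apiserver certificate. The same check can be sketched with Go's crypto/x509; reading the PEM from a local apiserver.crt copy is an assumption, since the test actually fetches it over `minikube ssh`:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // assumed local copy of the cert
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost and www.google.com
	fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1 and 192.168.15.15
}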

                                                
                                    
TestCertExpiration (252.84s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-355000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-355000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (37.834343674s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-355000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0425 12:50:38.646307    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-355000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (29.708791659s)
helpers_test.go:175: Cleaning up "cert-expiration-355000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-355000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-355000: (5.296672627s)
--- PASS: TestCertExpiration (252.84s)

                                                
                                    
TestDockerFlags (50.43s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-106000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-106000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (44.794574646s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-106000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-106000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-106000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-106000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-106000: (5.288100403s)
--- PASS: TestDockerFlags (50.43s)
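Note: the `systemctl show docker --property=Environment --no-pager` step above prints a line of the form `Environment=FOO=BAR BAZ=BAT`, and the test asserts that each --docker-env pair appears in it. A minimal sketch of that assertion with the systemctl output stubbed in (fetching it over `minikube ssh` is elided here):

package main

import (
	"fmt"
	"strings"
)

func main() {
	line := "Environment=FOO=BAR BAZ=BAT" // stub for the systemctl output
	fields := strings.Fields(strings.TrimPrefix(line, "Environment="))
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		found := false
		for _, f := range fields {
			if f == want {
				found = true
				break
			}
		}
		if !found {
			fmt.Printf("expected %s in docker Environment, got %q\n", want, line)
		}
	}
}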

                                                
                                    
TestForceSystemdFlag (41.7s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-347000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-347000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (36.210588892s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-347000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-347000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-347000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-347000: (5.313408326s)
--- PASS: TestForceSystemdFlag (41.70s)

                                                
                                    
TestForceSystemdEnv (43.24s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-027000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-027000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (37.752139827s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-027000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-027000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-027000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-027000: (5.282025535s)
--- PASS: TestForceSystemdEnv (43.24s)
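Note: TestForceSystemdFlag and TestForceSystemdEnv share one verification, visible at docker_test.go:110 in both runs: ask Docker inside the VM for its cgroup driver and expect "systemd" when --force-systemd (or the corresponding env var) is in effect. A sketch of that check, with the profile name taken from the run above purely for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "force-systemd-flag-347000",
		"ssh", "docker info --format {{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		fmt.Printf("expected cgroup driver systemd, got %q\n", driver)
	}
}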

                                                
                                    
TestHyperKitDriverInstallOrUpdate (8.2s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.20s)

                                                
                                    
TestErrorSpam/setup (35.49s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-360000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-360000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 --driver=hyperkit : (35.493072869s)
--- PASS: TestErrorSpam/setup (35.49s)

                                                
                                    
TestErrorSpam/start (1.69s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 start --dry-run
--- PASS: TestErrorSpam/start (1.69s)

                                                
                                    
TestErrorSpam/status (0.51s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 status
--- PASS: TestErrorSpam/status (0.51s)

                                                
                                    
TestErrorSpam/pause (1.34s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 pause
--- PASS: TestErrorSpam/pause (1.34s)

                                                
                                    
TestErrorSpam/unpause (1.37s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 unpause
--- PASS: TestErrorSpam/unpause (1.37s)

                                                
                                    
TestErrorSpam/stop (155.86s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 stop: (5.394067038s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 stop: (1m15.223007897s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 stop
E0425 11:38:34.122120    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:38:34.129226    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:38:34.141463    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:38:34.162281    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:38:34.202414    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:38:34.283195    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:38:34.444226    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:38:34.764435    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:38:35.406737    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:38:36.687019    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:38:39.249308    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:38:44.371630    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:38:54.612806    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:39:15.095702    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-360000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-360000 stop: (1m15.238005704s)
--- PASS: TestErrorSpam/stop (155.86s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18757-1425/.minikube/files/etc/test/nested/copy/1885/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (82.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-380000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0425 11:39:56.058571    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-380000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m22.012557987s)
--- PASS: TestFunctional/serial/StartWithProxy (82.01s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (40.47s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-380000 --alsologtostderr -v=8
E0425 11:41:17.981776    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-380000 --alsologtostderr -v=8: (40.464501995s)
functional_test.go:659: soft start took 40.464954556s for "functional-380000" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.47s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-380000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-380000 cache add registry.k8s.io/pause:3.1: (1.070536697s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.95s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-380000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4017847006/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 cache add minikube-local-cache-test:functional-380000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 cache delete minikube-local-cache-test:functional-380000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-380000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.53s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-380000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (148.827274ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.95s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 kubectl -- --context functional-380000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.95s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.36s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-380000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-380000 get pods: (1.355708039s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.36s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.84s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-380000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-380000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.84112897s)
functional_test.go:757: restart took 41.841292287s for "functional-380000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.84s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-380000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
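Note: ComponentHealth lists the control-plane pods as JSON and confirms each is Running with a Ready condition of "True", which produces the phase/status pairs logged above. A sketch of that parse, with an inline JSON literal standing in for the `kubectl get po -l tier=control-plane -n kube-system -o=json` output:

package main

import (
	"encoding/json"
	"fmt"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Stubbed kubectl output; the real test parses the live pod list.
	raw := []byte(`{"items":[{"metadata":{"name":"etcd-functional-380000"},
		"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}}]}`)
	var pods podList
	if err := json.Unmarshal(raw, &pods); err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s status: %s\n", p.Metadata.Name, c.Status)
			}
		}
	}
}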

                                                
                                    
TestFunctional/serial/LogsCmd (2.79s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-380000 logs: (2.793133377s)
--- PASS: TestFunctional/serial/LogsCmd (2.79s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.84s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2217993776/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-380000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2217993776/001/logs.txt: (2.838024774s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.84s)

                                                
                                    
TestFunctional/serial/InvalidService (4.63s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-380000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-380000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-380000: exit status 115 (269.170929ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.5:30454 |
	|-----------|-------------|-------------|--------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-380000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-380000 delete -f testdata/invalidsvc.yaml: (1.211140341s)
--- PASS: TestFunctional/serial/InvalidService (4.63s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-380000 config get cpus: exit status 14 (70.684809ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-380000 config get cpus: exit status 14 (65.460838ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
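Note: ConfigCmd leans on exit codes: `config get cpus` exits 14 when the key is unset, as the two Non-zero exit entries above show. A sketch of asserting that from Go via os/exec, with the binary path and profile name copied from the run purely for illustration:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-380000",
		"config", "get", "cpus")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 14 is what the test expects for an unset key.
		if code := exitErr.ExitCode(); code != 14 {
			fmt.Printf("expected exit status 14, got %d\n", code)
		}
		return
	}
	fmt.Println("expected a non-zero exit for an unset key, got:", err)
}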

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.96s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-380000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-380000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3273: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.96s)

                                                
                                    
TestFunctional/parallel/DryRun (0.98s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-380000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-380000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (497.994206ms)

-- stdout --
	* [functional-380000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile

-- /stdout --
** stderr ** 
	I0425 11:43:22.397869    3228 out.go:291] Setting OutFile to fd 1 ...
	I0425 11:43:22.398161    3228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:43:22.398167    3228 out.go:304] Setting ErrFile to fd 2...
	I0425 11:43:22.398171    3228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:43:22.398357    3228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 11:43:22.399725    3228 out.go:298] Setting JSON to false
	I0425 11:43:22.421572    3228 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":2572,"bootTime":1714068030,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0425 11:43:22.421659    3228 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 11:43:22.443082    3228 out.go:177] * [functional-380000] minikube v1.33.0 on Darwin 14.4.1
	I0425 11:43:22.484928    3228 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 11:43:22.484972    3228 notify.go:220] Checking for updates...
	I0425 11:43:22.507049    3228 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 11:43:22.548843    3228 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 11:43:22.569957    3228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 11:43:22.591186    3228 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	I0425 11:43:22.611921    3228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 11:43:22.633392    3228 config.go:182] Loaded profile config "functional-380000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 11:43:22.633869    3228 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:43:22.633917    3228 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:43:22.642816    3228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50587
	I0425 11:43:22.643194    3228 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:43:22.643604    3228 main.go:141] libmachine: Using API Version  1
	I0425 11:43:22.643617    3228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:43:22.643837    3228 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:43:22.643964    3228 main.go:141] libmachine: (functional-380000) Calling .DriverName
	I0425 11:43:22.644151    3228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 11:43:22.644382    3228 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:43:22.644402    3228 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:43:22.652837    3228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50589
	I0425 11:43:22.653168    3228 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:43:22.653495    3228 main.go:141] libmachine: Using API Version  1
	I0425 11:43:22.653504    3228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:43:22.653716    3228 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:43:22.653837    3228 main.go:141] libmachine: (functional-380000) Calling .DriverName
	I0425 11:43:22.681859    3228 out.go:177] * Using the hyperkit driver based on existing profile
	I0425 11:43:22.723741    3228 start.go:297] selected driver: hyperkit
	I0425 11:43:22.723759    3228 start.go:901] validating driver "hyperkit" against &{Name:functional-380000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-380000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 11:43:22.723873    3228 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 11:43:22.747860    3228 out.go:177] 
	W0425 11:43:22.768871    3228 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0425 11:43:22.789769    3228 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-380000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (0.98s)
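For reference, the dry-run failure above exercises minikube's up-front memory validation: a start requesting 250MB is rejected against the 1800MB usable minimum before any VM work begins, and the process exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY; the exit code is visible in the InternationalLanguage run below). A minimal Go sketch of the same assertion, reusing the binary path and profile name from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --dry-run validates the requested config without creating a VM;
		// 250MB is deliberately below minikube's usable minimum.
		cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-380000",
			"--dry-run", "--memory", "250MB", "--driver=hyperkit")
		if err := cmd.Run(); err != nil {
			if exitErr, ok := err.(*exec.ExitError); ok {
				fmt.Println("exit code:", exitErr.ExitCode()) // expected: 23
			}
		}
	}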

TestFunctional/parallel/InternationalLanguage (0.59s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-380000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-380000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (589.538475ms)

-- stdout --
	* [functional-380000] minikube v1.33.0 sur Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0425 11:43:23.310172    3248 out.go:291] Setting OutFile to fd 1 ...
	I0425 11:43:23.310642    3248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:43:23.310653    3248 out.go:304] Setting ErrFile to fd 2...
	I0425 11:43:23.310660    3248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:43:23.310986    3248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 11:43:23.313128    3248 out.go:298] Setting JSON to false
	I0425 11:43:23.336533    3248 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":2573,"bootTime":1714068030,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0425 11:43:23.336623    3248 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0425 11:43:23.357681    3248 out.go:177] * [functional-380000] minikube v1.33.0 sur Darwin 14.4.1
	I0425 11:43:23.421628    3248 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 11:43:23.399879    3248 notify.go:220] Checking for updates...
	I0425 11:43:23.463686    3248 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	I0425 11:43:23.484592    3248 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0425 11:43:23.505696    3248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 11:43:23.547542    3248 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	I0425 11:43:23.610655    3248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 11:43:23.632529    3248 config.go:182] Loaded profile config "functional-380000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 11:43:23.633032    3248 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:43:23.633089    3248 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:43:23.642217    3248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50609
	I0425 11:43:23.642583    3248 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:43:23.643000    3248 main.go:141] libmachine: Using API Version  1
	I0425 11:43:23.643009    3248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:43:23.643280    3248 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:43:23.643409    3248 main.go:141] libmachine: (functional-380000) Calling .DriverName
	I0425 11:43:23.643606    3248 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 11:43:23.643868    3248 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:43:23.643890    3248 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:43:23.652248    3248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50611
	I0425 11:43:23.652583    3248 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:43:23.652964    3248 main.go:141] libmachine: Using API Version  1
	I0425 11:43:23.652980    3248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:43:23.653200    3248 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:43:23.653304    3248 main.go:141] libmachine: (functional-380000) Calling .DriverName
	I0425 11:43:23.681542    3248 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0425 11:43:23.723712    3248 start.go:297] selected driver: hyperkit
	I0425 11:43:23.723730    3248 start.go:901] validating driver "hyperkit" against &{Name:functional-380000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-380000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 11:43:23.723878    3248 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 11:43:23.747597    3248 out.go:177] 
	W0425 11:43:23.768867    3248 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0425 11:43:23.789740    3248 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.59s)
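The command at functional_test.go:1016 is the same failing dry-run as in DryRun above; what this test adds is the locale. The French lines translate as "Using the hyperkit driver based on the existing profile" and "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB". The locale itself is not visible in this log; the sketch below assumes it is injected through the standard LC_ALL environment variable:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-380000",
			"--dry-run", "--memory", "250MB", "--driver=hyperkit")
		// Assumption: minikube picks its message catalog from the standard locale variables.
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run() // expected: exit status 23, messages rendered in French
	}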

TestFunctional/parallel/StatusCmd (0.51s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.51s)
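The three invocations above cover the default status table, a custom go-template (the "kublet" key is the label used in the test's format string, reproduced verbatim here), and JSON output. A sketch of consuming the JSON form; the struct is a hypothetical subset of the fields minikube prints:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Status mirrors a subset of minikube's status JSON for a single-node profile.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-380000",
			"status", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var st Status
		if err := json.Unmarshal(out, &st); err != nil {
			panic(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
	}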

TestFunctional/parallel/ServiceCmdConnect (7.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-380000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-380000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-bgl4b" [39e4e9a1-23f5-425c-afe3-f7f175d41286] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-bgl4b" [39e4e9a1-23f5-425c-afe3-f7f175d41286] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003704691s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.5:31444
functional_test.go:1671: http://192.169.0.5:31444: success! body:

Hostname: hello-node-connect-57b4589c47-bgl4b

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.5:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.5:31444
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.57s)
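The flow above is: create the echoserver deployment, expose it as a NodePort service on port 8080, resolve a reachable URL with "minikube service --url" (the VM IP 192.169.0.5 plus the allocated NodePort 31444), then fetch it and check the body. A sketch of that final verification step, using the URL printed in this run:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// URL as returned by "minikube service hello-node-connect --url" above.
		resp, err := http.Get("http://192.169.0.5:31444")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(body)) // echoserver reflects hostname, headers, and request info
	}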

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (28.14s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6d9386aa-9586-4824-9853-929870649012] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004206848s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-380000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-380000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-380000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-380000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6c1e2e0d-c071-4db2-8311-df294b572dc3] Pending
helpers_test.go:344: "sp-pod" [6c1e2e0d-c071-4db2-8311-df294b572dc3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6c1e2e0d-c071-4db2-8311-df294b572dc3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004570379s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-380000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-380000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-380000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [526b51df-d1c7-4bc5-9bfb-8f7ba16bd335] Pending
helpers_test.go:344: "sp-pod" [526b51df-d1c7-4bc5-9bfb-8f7ba16bd335] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [526b51df-d1c7-4bc5-9bfb-8f7ba16bd335] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005379142s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-380000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.14s)
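The sequence above shows that data written to the PVC-backed mount survives pod recreation: touch a marker file, delete and re-apply the pod, then list the mount again. A condensed Go sketch of that assertion (it omits the readiness wait the real test performs between re-applying the pod and exec'ing into it):

	package main

	import (
		"os/exec"
		"strings"
	)

	func kubectl(args ...string) string {
		out, err := exec.Command("kubectl",
			append([]string{"--context", "functional-380000"}, args...)...).CombinedOutput()
		if err != nil {
			panic(string(out))
		}
		return string(out)
	}

	func main() {
		// The claim, not the pod, owns the data: the file must outlive the pod.
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// (the real test waits for the new pod to be Running here)
		if !strings.Contains(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"), "foo") {
			panic("file did not survive pod recreation")
		}
	}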

TestFunctional/parallel/SSHCmd (0.3s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.30s)

TestFunctional/parallel/CpCmd (1.13s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh -n functional-380000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 cp functional-380000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd3139762450/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh -n functional-380000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh -n functional-380000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.13s)

TestFunctional/parallel/MySQL (25.76s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-380000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-5cgql" [92642464-0d53-4fce-8590-69df0a3aeebb] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-5cgql" [92642464-0d53-4fce-8590-69df0a3aeebb] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.003013084s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-380000 exec mysql-64454c8b5c-5cgql -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-380000 exec mysql-64454c8b5c-5cgql -- mysql -ppassword -e "show databases;": exit status 1 (170.41218ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-380000 exec mysql-64454c8b5c-5cgql -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-380000 exec mysql-64454c8b5c-5cgql -- mysql -ppassword -e "show databases;": exit status 1 (135.034299ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-380000 exec mysql-64454c8b5c-5cgql -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.76s)
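The two non-zero exits above are expected start-up noise: ERROR 1045 and ERROR 2002 both occur while mysqld is still initializing inside the container, so the test simply retries the query until it succeeds. A sketch of that retry shape; the interval and attempt count are illustrative rather than the test's actual values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		args := []string{"--context", "functional-380000", "exec", "mysql-64454c8b5c-5cgql",
			"--", "mysql", "-ppassword", "-e", "show databases;"}
		for attempt := 1; attempt <= 10; attempt++ {
			if out, err := exec.Command("kubectl", args...).CombinedOutput(); err == nil {
				fmt.Print(string(out))
				return
			}
			time.Sleep(2 * time.Second) // mysqld may still be initializing
		}
		panic("mysql never became ready")
	}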

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1885/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "sudo cat /etc/test/nested/copy/1885/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.36s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1885.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "sudo cat /etc/ssl/certs/1885.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1885.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "sudo cat /usr/share/ca-certificates/1885.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/18852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "sudo cat /etc/ssl/certs/18852.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/18852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "sudo cat /usr/share/ca-certificates/18852.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.36s)
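Each synced certificate is checked above under /etc/ssl/certs, under /usr/share/ca-certificates, and under a hashed name (51391683.0, 3ec20f2e.0). Those .0 names appear to be OpenSSL subject-hash link names; a sketch of computing one (the input path is hypothetical, standing in for the cert the test synced):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// openssl prints the subject hash that gives a CA cert its /etc/ssl/certs/<hash>.0 name.
		out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
			"-in", "/path/to/1885.pem").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out)) + ".0")
	}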

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-380000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
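The label check renders every node label key through a kubectl go-template. The same template can be evaluated locally with text/template; the label map below is a hypothetical subset of what kubectl would return for this node:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Same template body the test passes to kubectl --output=go-template.
		tmpl := template.Must(template.New("labels").Parse(
			"{{range $k, $v := .}}{{$k}} {{end}}\n"))
		labels := map[string]string{ // hypothetical sample
			"kubernetes.io/hostname": "functional-380000",
			"kubernetes.io/os":       "linux",
		}
		if err := tmpl.Execute(os.Stdout, labels); err != nil {
			panic(err)
		}
	}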

TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-380000 ssh "sudo systemctl is-active crio": exit status 1 (171.171512ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)
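With docker as the active container runtime, the test expects crio to be disabled in the VM: systemctl is-active prints "inactive" and exits with status 3 for an inactive unit, ssh surfaces that ("Process exited with status 3"), and the non-zero exit plus the "inactive" stdout is the passing case. A sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-380000",
			"ssh", "sudo systemctl is-active crio").CombinedOutput()
		// err is expected to be non-nil here: is-active exits 3 for an inactive unit.
		fmt.Printf("output=%q err=%v\n", strings.TrimSpace(string(out)), err)
	}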

TestFunctional/parallel/License (0.54s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.54s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-380000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-380000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-380000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-380000 image ls --format short --alsologtostderr:
I0425 11:43:25.260416    3280 out.go:291] Setting OutFile to fd 1 ...
I0425 11:43:25.260738    3280 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:43:25.260744    3280 out.go:304] Setting ErrFile to fd 2...
I0425 11:43:25.260748    3280 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:43:25.260917    3280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
I0425 11:43:25.261543    3280 config.go:182] Loaded profile config "functional-380000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:43:25.261636    3280 config.go:182] Loaded profile config "functional-380000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:43:25.261986    3280 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0425 11:43:25.262028    3280 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0425 11:43:25.270400    3280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50652
I0425 11:43:25.270796    3280 main.go:141] libmachine: () Calling .GetVersion
I0425 11:43:25.271204    3280 main.go:141] libmachine: Using API Version  1
I0425 11:43:25.271236    3280 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 11:43:25.271502    3280 main.go:141] libmachine: () Calling .GetMachineName
I0425 11:43:25.271647    3280 main.go:141] libmachine: (functional-380000) Calling .GetState
I0425 11:43:25.271737    3280 main.go:141] libmachine: (functional-380000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0425 11:43:25.271811    3280 main.go:141] libmachine: (functional-380000) DBG | hyperkit pid from json: 2514
I0425 11:43:25.273095    3280 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0425 11:43:25.273129    3280 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0425 11:43:25.281479    3280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50654
I0425 11:43:25.281817    3280 main.go:141] libmachine: () Calling .GetVersion
I0425 11:43:25.282161    3280 main.go:141] libmachine: Using API Version  1
I0425 11:43:25.282174    3280 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 11:43:25.282460    3280 main.go:141] libmachine: () Calling .GetMachineName
I0425 11:43:25.282594    3280 main.go:141] libmachine: (functional-380000) Calling .DriverName
I0425 11:43:25.282788    3280 ssh_runner.go:195] Run: systemctl --version
I0425 11:43:25.282808    3280 main.go:141] libmachine: (functional-380000) Calling .GetSSHHostname
I0425 11:43:25.282895    3280 main.go:141] libmachine: (functional-380000) Calling .GetSSHPort
I0425 11:43:25.282987    3280 main.go:141] libmachine: (functional-380000) Calling .GetSSHKeyPath
I0425 11:43:25.283078    3280 main.go:141] libmachine: (functional-380000) Calling .GetSSHUsername
I0425 11:43:25.283165    3280 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/functional-380000/id_rsa Username:docker}
I0425 11:43:25.314331    3280 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0425 11:43:25.330805    3280 main.go:141] libmachine: Making call to close driver server
I0425 11:43:25.330814    3280 main.go:141] libmachine: (functional-380000) Calling .Close
I0425 11:43:25.331013    3280 main.go:141] libmachine: Successfully made call to close driver server
I0425 11:43:25.331025    3280 main.go:141] libmachine: (functional-380000) DBG | Closing plugin on server side
I0425 11:43:25.331043    3280 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 11:43:25.331056    3280 main.go:141] libmachine: Making call to close driver server
I0425 11:43:25.331064    3280 main.go:141] libmachine: (functional-380000) Calling .Close
I0425 11:43:25.331196    3280 main.go:141] libmachine: (functional-380000) DBG | Closing plugin on server side
I0425 11:43:25.331199    3280 main.go:141] libmachine: Successfully made call to close driver server
I0425 11:43:25.331212    3280 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)
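This and the ImageListTable/Json/Yaml variants below all shell into the VM and run the same underlying listing, visible in the stderr traces as docker images --no-trunc --format "{{json .}}"; only the rendering differs. A sketch of issuing that listing directly over minikube ssh:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The quoted command is passed through to the VM's shell by minikube ssh.
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-380000",
			"ssh", `docker images --no-trunc --format "{{json .}}"`).Output()
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}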

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-380000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | 7383c266ef252 | 188MB  |
| docker.io/library/nginx                     | alpine            | f4215f6ee683f | 48.3MB |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/localhost/my-image                | functional-380000 | 16b96a4062db8 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-380000 | dff26ad519932 | 30B    |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/google-containers/addon-resizer      | functional-380000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-380000 image ls --format table --alsologtostderr:
I0425 11:43:27.603321    3305 out.go:291] Setting OutFile to fd 1 ...
I0425 11:43:27.603533    3305 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:43:27.603538    3305 out.go:304] Setting ErrFile to fd 2...
I0425 11:43:27.603542    3305 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:43:27.603734    3305 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
I0425 11:43:27.605301    3305 config.go:182] Loaded profile config "functional-380000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:43:27.605413    3305 config.go:182] Loaded profile config "functional-380000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:43:27.605734    3305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0425 11:43:27.605785    3305 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0425 11:43:27.613988    3305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50689
I0425 11:43:27.614433    3305 main.go:141] libmachine: () Calling .GetVersion
I0425 11:43:27.614821    3305 main.go:141] libmachine: Using API Version  1
I0425 11:43:27.614830    3305 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 11:43:27.615076    3305 main.go:141] libmachine: () Calling .GetMachineName
I0425 11:43:27.615185    3305 main.go:141] libmachine: (functional-380000) Calling .GetState
I0425 11:43:27.615263    3305 main.go:141] libmachine: (functional-380000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0425 11:43:27.615334    3305 main.go:141] libmachine: (functional-380000) DBG | hyperkit pid from json: 2514
I0425 11:43:27.616638    3305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0425 11:43:27.616671    3305 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0425 11:43:27.624796    3305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50691
I0425 11:43:27.625101    3305 main.go:141] libmachine: () Calling .GetVersion
I0425 11:43:27.625469    3305 main.go:141] libmachine: Using API Version  1
I0425 11:43:27.625485    3305 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 11:43:27.625725    3305 main.go:141] libmachine: () Calling .GetMachineName
I0425 11:43:27.625861    3305 main.go:141] libmachine: (functional-380000) Calling .DriverName
I0425 11:43:27.626015    3305 ssh_runner.go:195] Run: systemctl --version
I0425 11:43:27.626034    3305 main.go:141] libmachine: (functional-380000) Calling .GetSSHHostname
I0425 11:43:27.626108    3305 main.go:141] libmachine: (functional-380000) Calling .GetSSHPort
I0425 11:43:27.626199    3305 main.go:141] libmachine: (functional-380000) Calling .GetSSHKeyPath
I0425 11:43:27.626286    3305 main.go:141] libmachine: (functional-380000) Calling .GetSSHUsername
I0425 11:43:27.626374    3305 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/functional-380000/id_rsa Username:docker}
I0425 11:43:27.657881    3305 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0425 11:43:27.673374    3305 main.go:141] libmachine: Making call to close driver server
I0425 11:43:27.673383    3305 main.go:141] libmachine: (functional-380000) Calling .Close
I0425 11:43:27.673524    3305 main.go:141] libmachine: Successfully made call to close driver server
I0425 11:43:27.673541    3305 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 11:43:27.673545    3305 main.go:141] libmachine: (functional-380000) DBG | Closing plugin on server side
I0425 11:43:27.673571    3305 main.go:141] libmachine: Making call to close driver server
I0425 11:43:27.673577    3305 main.go:141] libmachine: (functional-380000) Calling .Close
I0425 11:43:27.673699    3305 main.go:141] libmachine: Successfully made call to close driver server
I0425 11:43:27.673709    3305 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 11:43:27.673721    3305 main.go:141] libmachine: (functional-380000) DBG | Closing plugin on server side
E0425 11:43:34.130844    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
2024/04/25 11:43:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-380000 image ls --format json --alsologtostderr:
[{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"dff26ad519932f87b30f2e2a219c86556f284a49055ec9ffcc2df63503ef77cf","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-380000"],"size":"30"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-380000"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"16b96a4062db8851714db9a79ef247a9261a7b02e36d3f2d9455ddda517f1252","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-380000"],"size":"1240000"},{"id":"f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-380000 image ls --format json --alsologtostderr:
I0425 11:43:27.447302    3301 out.go:291] Setting OutFile to fd 1 ...
I0425 11:43:27.447488    3301 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:43:27.447494    3301 out.go:304] Setting ErrFile to fd 2...
I0425 11:43:27.447497    3301 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:43:27.448432    3301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
I0425 11:43:27.449271    3301 config.go:182] Loaded profile config "functional-380000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:43:27.449365    3301 config.go:182] Loaded profile config "functional-380000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:43:27.449702    3301 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0425 11:43:27.449751    3301 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0425 11:43:27.458068    3301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50684
I0425 11:43:27.458468    3301 main.go:141] libmachine: () Calling .GetVersion
I0425 11:43:27.458893    3301 main.go:141] libmachine: Using API Version  1
I0425 11:43:27.458902    3301 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 11:43:27.459151    3301 main.go:141] libmachine: () Calling .GetMachineName
I0425 11:43:27.459280    3301 main.go:141] libmachine: (functional-380000) Calling .GetState
I0425 11:43:27.459365    3301 main.go:141] libmachine: (functional-380000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0425 11:43:27.459439    3301 main.go:141] libmachine: (functional-380000) DBG | hyperkit pid from json: 2514
I0425 11:43:27.460725    3301 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0425 11:43:27.460747    3301 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0425 11:43:27.469401    3301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50686
I0425 11:43:27.469740    3301 main.go:141] libmachine: () Calling .GetVersion
I0425 11:43:27.470130    3301 main.go:141] libmachine: Using API Version  1
I0425 11:43:27.470155    3301 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 11:43:27.470361    3301 main.go:141] libmachine: () Calling .GetMachineName
I0425 11:43:27.470465    3301 main.go:141] libmachine: (functional-380000) Calling .DriverName
I0425 11:43:27.470626    3301 ssh_runner.go:195] Run: systemctl --version
I0425 11:43:27.470645    3301 main.go:141] libmachine: (functional-380000) Calling .GetSSHHostname
I0425 11:43:27.470721    3301 main.go:141] libmachine: (functional-380000) Calling .GetSSHPort
I0425 11:43:27.470832    3301 main.go:141] libmachine: (functional-380000) Calling .GetSSHKeyPath
I0425 11:43:27.470919    3301 main.go:141] libmachine: (functional-380000) Calling .GetSSHUsername
I0425 11:43:27.471004    3301 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/functional-380000/id_rsa Username:docker}
I0425 11:43:27.501463    3301 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0425 11:43:27.517882    3301 main.go:141] libmachine: Making call to close driver server
I0425 11:43:27.517891    3301 main.go:141] libmachine: (functional-380000) Calling .Close
I0425 11:43:27.518042    3301 main.go:141] libmachine: (functional-380000) DBG | Closing plugin on server side
I0425 11:43:27.518042    3301 main.go:141] libmachine: Successfully made call to close driver server
I0425 11:43:27.518054    3301 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 11:43:27.518059    3301 main.go:141] libmachine: Making call to close driver server
I0425 11:43:27.518063    3301 main.go:141] libmachine: (functional-380000) Calling .Close
I0425 11:43:27.518207    3301 main.go:141] libmachine: (functional-380000) DBG | Closing plugin on server side
I0425 11:43:27.518217    3301 main.go:141] libmachine: Successfully made call to close driver server
I0425 11:43:27.518227    3301 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-380000 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: dff26ad519932f87b30f2e2a219c86556f284a49055ec9ffcc2df63503ef77cf
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-380000
size: "30"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-380000
size: "32900000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-380000 image ls --format yaml --alsologtostderr:
I0425 11:43:25.417107    3284 out.go:291] Setting OutFile to fd 1 ...
I0425 11:43:25.417321    3284 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:43:25.417327    3284 out.go:304] Setting ErrFile to fd 2...
I0425 11:43:25.417331    3284 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:43:25.417521    3284 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
I0425 11:43:25.418219    3284 config.go:182] Loaded profile config "functional-380000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:43:25.418319    3284 config.go:182] Loaded profile config "functional-380000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:43:25.418672    3284 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0425 11:43:25.418718    3284 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0425 11:43:25.426916    3284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50658
I0425 11:43:25.427402    3284 main.go:141] libmachine: () Calling .GetVersion
I0425 11:43:25.427839    3284 main.go:141] libmachine: Using API Version  1
I0425 11:43:25.427876    3284 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 11:43:25.428072    3284 main.go:141] libmachine: () Calling .GetMachineName
I0425 11:43:25.428188    3284 main.go:141] libmachine: (functional-380000) Calling .GetState
I0425 11:43:25.428271    3284 main.go:141] libmachine: (functional-380000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0425 11:43:25.428335    3284 main.go:141] libmachine: (functional-380000) DBG | hyperkit pid from json: 2514
I0425 11:43:25.429619    3284 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0425 11:43:25.429639    3284 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0425 11:43:25.437967    3284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50660
I0425 11:43:25.438320    3284 main.go:141] libmachine: () Calling .GetVersion
I0425 11:43:25.438655    3284 main.go:141] libmachine: Using API Version  1
I0425 11:43:25.438667    3284 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 11:43:25.438918    3284 main.go:141] libmachine: () Calling .GetMachineName
I0425 11:43:25.439048    3284 main.go:141] libmachine: (functional-380000) Calling .DriverName
I0425 11:43:25.439213    3284 ssh_runner.go:195] Run: systemctl --version
I0425 11:43:25.439231    3284 main.go:141] libmachine: (functional-380000) Calling .GetSSHHostname
I0425 11:43:25.439308    3284 main.go:141] libmachine: (functional-380000) Calling .GetSSHPort
I0425 11:43:25.439395    3284 main.go:141] libmachine: (functional-380000) Calling .GetSSHKeyPath
I0425 11:43:25.439480    3284 main.go:141] libmachine: (functional-380000) Calling .GetSSHUsername
I0425 11:43:25.439561    3284 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/functional-380000/id_rsa Username:docker}
I0425 11:43:25.470681    3284 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0425 11:43:25.485342    3284 main.go:141] libmachine: Making call to close driver server
I0425 11:43:25.485350    3284 main.go:141] libmachine: (functional-380000) Calling .Close
I0425 11:43:25.485497    3284 main.go:141] libmachine: (functional-380000) DBG | Closing plugin on server side
I0425 11:43:25.485536    3284 main.go:141] libmachine: Successfully made call to close driver server
I0425 11:43:25.485551    3284 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 11:43:25.485564    3284 main.go:141] libmachine: Making call to close driver server
I0425 11:43:25.485569    3284 main.go:141] libmachine: (functional-380000) Calling .Close
I0425 11:43:25.485700    3284 main.go:141] libmachine: Successfully made call to close driver server
I0425 11:43:25.485708    3284 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 11:43:25.485709    3284 main.go:141] libmachine: (functional-380000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-380000 ssh pgrep buildkitd: exit status 1 (129.31531ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image build -t localhost/my-image:functional-380000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-380000 image build -t localhost/my-image:functional-380000 testdata/build --alsologtostderr: (1.589281959s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-380000 image build -t localhost/my-image:functional-380000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 5d73faec973e
---> Removed intermediate container 5d73faec973e
---> 26175dca09f5
Step 3/3 : ADD content.txt /
---> 16b96a4062db
Successfully built 16b96a4062db
Successfully tagged localhost/my-image:functional-380000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-380000 image build -t localhost/my-image:functional-380000 testdata/build --alsologtostderr:
I0425 11:43:25.700968    3293 out.go:291] Setting OutFile to fd 1 ...
I0425 11:43:25.701230    3293 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:43:25.701236    3293 out.go:304] Setting ErrFile to fd 2...
I0425 11:43:25.701240    3293 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 11:43:25.701415    3293 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
I0425 11:43:25.702052    3293 config.go:182] Loaded profile config "functional-380000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:43:25.702687    3293 config.go:182] Loaded profile config "functional-380000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0425 11:43:25.703043    3293 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0425 11:43:25.703081    3293 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0425 11:43:25.711385    3293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50671
I0425 11:43:25.711847    3293 main.go:141] libmachine: () Calling .GetVersion
I0425 11:43:25.712262    3293 main.go:141] libmachine: Using API Version  1
I0425 11:43:25.712271    3293 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 11:43:25.712474    3293 main.go:141] libmachine: () Calling .GetMachineName
I0425 11:43:25.712588    3293 main.go:141] libmachine: (functional-380000) Calling .GetState
I0425 11:43:25.712684    3293 main.go:141] libmachine: (functional-380000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0425 11:43:25.712754    3293 main.go:141] libmachine: (functional-380000) DBG | hyperkit pid from json: 2514
I0425 11:43:25.714027    3293 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0425 11:43:25.714049    3293 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0425 11:43:25.722556    3293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50673
I0425 11:43:25.722897    3293 main.go:141] libmachine: () Calling .GetVersion
I0425 11:43:25.723281    3293 main.go:141] libmachine: Using API Version  1
I0425 11:43:25.723299    3293 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 11:43:25.723531    3293 main.go:141] libmachine: () Calling .GetMachineName
I0425 11:43:25.723662    3293 main.go:141] libmachine: (functional-380000) Calling .DriverName
I0425 11:43:25.723824    3293 ssh_runner.go:195] Run: systemctl --version
I0425 11:43:25.723844    3293 main.go:141] libmachine: (functional-380000) Calling .GetSSHHostname
I0425 11:43:25.723929    3293 main.go:141] libmachine: (functional-380000) Calling .GetSSHPort
I0425 11:43:25.724008    3293 main.go:141] libmachine: (functional-380000) Calling .GetSSHKeyPath
I0425 11:43:25.724085    3293 main.go:141] libmachine: (functional-380000) Calling .GetSSHUsername
I0425 11:43:25.724159    3293 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/functional-380000/id_rsa Username:docker}
I0425 11:43:25.756089    3293 build_images.go:161] Building image from path: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.4031145931.tar
I0425 11:43:25.756168    3293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0425 11:43:25.763926    3293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4031145931.tar
I0425 11:43:25.767395    3293 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4031145931.tar: stat -c "%s %y" /var/lib/minikube/build/build.4031145931.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4031145931.tar': No such file or directory
I0425 11:43:25.767438    3293 ssh_runner.go:362] scp /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.4031145931.tar --> /var/lib/minikube/build/build.4031145931.tar (3072 bytes)
I0425 11:43:25.790722    3293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4031145931
I0425 11:43:25.798868    3293 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4031145931 -xf /var/lib/minikube/build/build.4031145931.tar
I0425 11:43:25.806310    3293 docker.go:360] Building image: /var/lib/minikube/build/build.4031145931
I0425 11:43:25.806371    3293 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-380000 /var/lib/minikube/build/build.4031145931
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0425 11:43:27.183702    3293 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-380000 /var/lib/minikube/build/build.4031145931: (1.377273656s)
I0425 11:43:27.183763    3293 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4031145931
I0425 11:43:27.191994    3293 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4031145931.tar
I0425 11:43:27.204234    3293 build_images.go:217] Built localhost/my-image:functional-380000 from /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.4031145931.tar
I0425 11:43:27.204263    3293 build_images.go:133] succeeded building to: functional-380000
I0425 11:43:27.204274    3293 build_images.go:134] failed building to: 
I0425 11:43:27.204292    3293 main.go:141] libmachine: Making call to close driver server
I0425 11:43:27.204299    3293 main.go:141] libmachine: (functional-380000) Calling .Close
I0425 11:43:27.204479    3293 main.go:141] libmachine: Successfully made call to close driver server
I0425 11:43:27.204493    3293 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 11:43:27.204496    3293 main.go:141] libmachine: (functional-380000) DBG | Closing plugin on server side
I0425 11:43:27.204498    3293 main.go:141] libmachine: Making call to close driver server
I0425 11:43:27.204507    3293 main.go:141] libmachine: (functional-380000) Calling .Close
I0425 11:43:27.204683    3293 main.go:141] libmachine: Successfully made call to close driver server
I0425 11:43:27.204684    3293 main.go:141] libmachine: (functional-380000) DBG | Closing plugin on server side
I0425 11:43:27.204693    3293 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.88s)
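
The passing build above can be replayed by hand with the same commands the test issued. A sketch, assuming a comparable testdata/build context (the Dockerfile steps FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt / are taken from the build output, not from the file itself):

    # expected to exit non-zero: BuildKit is not running in the guest
    out/minikube-darwin-amd64 -p functional-380000 ssh pgrep buildkitd
    # build from the local context, then confirm the image is listed
    out/minikube-darwin-amd64 -p functional-380000 image build -t localhost/my-image:functional-380000 testdata/build --alsologtostderr
    out/minikube-darwin-amd64 -p functional-380000 image ls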

TestFunctional/parallel/ImageCommands/Setup (2.32s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.246967199s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-380000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.32s)

TestFunctional/parallel/DockerEnv/bash (0.78s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-380000 docker-env) && out/minikube-darwin-amd64 status -p functional-380000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-380000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.78s)
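
The two commands above show the usual docker-env pattern: evaluate the exported variables, then talk to the VM's Docker daemon directly. A minimal sketch reusing the test's exact invocations:

    # point the local docker client at the daemon inside functional-380000
    eval $(out/minikube-darwin-amd64 -p functional-380000 docker-env)
    # now lists images from the VM's daemon rather than the host's
    docker images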

TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image load --daemon gcr.io/google-containers/addon-resizer:functional-380000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-380000 image load --daemon gcr.io/google-containers/addon-resizer:functional-380000 --alsologtostderr: (3.578391875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.74s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image load --daemon gcr.io/google-containers/addon-resizer:functional-380000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-380000 image load --daemon gcr.io/google-containers/addon-resizer:functional-380000 --alsologtostderr: (2.055121098s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.740296568s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-380000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image load --daemon gcr.io/google-containers/addon-resizer:functional-380000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-380000 image load --daemon gcr.io/google-containers/addon-resizer:functional-380000 --alsologtostderr: (3.314286894s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.27s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image save gcr.io/google-containers/addon-resizer:functional-380000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-380000 image save gcr.io/google-containers/addon-resizer:functional-380000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.250157843s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.25s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image rm gcr.io/google-containers/addon-resizer:functional-380000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-380000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.256246196s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.42s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-380000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 image save --daemon gcr.io/google-containers/addon-resizer:functional-380000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-380000 image save --daemon gcr.io/google-containers/addon-resizer:functional-380000 --alsologtostderr: (1.264573553s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-380000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.37s)
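
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon cover a full image round trip: save to a tar on the host, remove, load back, then save straight into the host Docker daemon. A condensed sketch of that cycle using the same commands and the tar path from this run:

    out/minikube-darwin-amd64 -p functional-380000 image save gcr.io/google-containers/addon-resizer:functional-380000 /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-380000 image rm gcr.io/google-containers/addon-resizer:functional-380000
    out/minikube-darwin-amd64 -p functional-380000 image load /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-380000 image save --daemon gcr.io/google-containers/addon-resizer:functional-380000
    # confirm the image landed in the host daemon
    docker image inspect gcr.io/google-containers/addon-resizer:functional-380000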

TestFunctional/parallel/ServiceCmd/DeployApp (12.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-380000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-380000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-hknqw" [be113483-d36a-4156-80a7-204e30250a9f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-hknqw" [be113483-d36a-4156-80a7-204e30250a9f] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003476421s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.13s)
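
The hello-node deployment and NodePort service created here are what the later ServiceCmd subtests query. A sketch of the same setup plus the lookup that follows (the NodePort is assigned dynamically; 32049 happens to be this run's value):

    kubectl --context functional-380000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-380000 expose deployment hello-node --type=NodePort --port=8080
    # prints the reachable endpoint, e.g. http://192.169.0.5:32049 in this run
    out/minikube-darwin-amd64 -p functional-380000 service hello-node --url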

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.39s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-380000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-380000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-380000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-380000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2981: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.39s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-380000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-380000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8c286c72-05c8-437a-b0e7-4f5d1476aebc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8c286c72-05c8-437a-b0e7-4f5d1476aebc] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004088874s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

TestFunctional/parallel/ServiceCmd/List (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.36s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 service list -o json
functional_test.go:1490: Took "384.013151ms" to run "out/minikube-darwin-amd64 -p functional-380000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.5:32049
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

TestFunctional/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

TestFunctional/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.5:32049
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-380000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.185.146 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
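
The DNS subtests above verify resolution along two macOS paths while the tunnel is up: dig queries the cluster DNS service at 10.96.0.10 directly, and dscacheutil goes through the system resolver. The same checks, runnable against a live tunnel:

    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.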

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-380000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "211.673572ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "86.306818ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "208.442965ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "84.360989ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/MountCmd/any-port (6.05s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-380000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1996229797/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714070593099810000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1996229797/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714070593099810000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1996229797/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714070593099810000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1996229797/001/test-1714070593099810000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-380000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (163.172568ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 25 18:43 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 25 18:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 25 18:43 test-1714070593099810000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh cat /mount-9p/test-1714070593099810000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-380000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [233ff7e7-9749-4ccd-b03c-b79e21a00d59] Pending
helpers_test.go:344: "busybox-mount" [233ff7e7-9749-4ccd-b03c-b79e21a00d59] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [233ff7e7-9749-4ccd-b03c-b79e21a00d59] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [233ff7e7-9749-4ccd-b03c-b79e21a00d59] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003661723s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-380000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-380000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1996229797/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.05s)
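
The 9p mount flow above reduces to: start the mount daemon, confirm the mount with findmnt (the first probe can race the mount coming up, which is why the test retried it), exercise the directory, and unmount. A sketch with a placeholder host directory standing in for the test's per-run temp dir:

    # /tmp/mnt-demo is a stand-in for the temp directory the test generates
    out/minikube-darwin-amd64 mount -p functional-380000 /tmp/mnt-demo:/mount-9p --alsologtostderr -v=1 &
    out/minikube-darwin-amd64 -p functional-380000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-amd64 -p functional-380000 ssh -- ls -la /mount-9p
    out/minikube-darwin-amd64 -p functional-380000 ssh "sudo umount -f /mount-9p"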

TestFunctional/parallel/MountCmd/specific-port (1.52s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-380000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port785475191/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-380000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (160.203118ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-380000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port785475191/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-380000 ssh "sudo umount -f /mount-9p": exit status 1 (134.130635ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-380000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-380000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port785475191/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.52s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-380000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1493546091/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-380000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1493546091/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-380000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1493546091/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-380000 ssh "findmnt -T" /mount1: exit status 1 (175.601143ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-380000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-380000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-380000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1493546091/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-380000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1493546091/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-380000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1493546091/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

TestFunctional/delete_addon-resizer_images (0.12s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-380000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-380000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-380000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestMultiControlPlane/serial/StartCluster (206.89s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-703000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0425 11:44:01.827563    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-703000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m26.505582052s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (206.89s)
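
The start invocation above is the whole HA setup: --ha provisions a multi-control-plane cluster, --wait=true blocks until the components report healthy, and the follow-up status call checks every node. Reproduced verbatim from the log:

    out/minikube-darwin-amd64 start -p ha-703000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit
    out/minikube-darwin-amd64 -p ha-703000 status -v=7 --alsologtostderr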

TestMultiControlPlane/serial/DeployApp (5.09s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-703000 -- rollout status deployment/busybox: (2.723720611s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-dsqwr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-f5d78 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-h9ghq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-dsqwr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-f5d78 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-h9ghq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-dsqwr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-f5d78 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-h9ghq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.09s)
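
Each busybox replica is probed with the same three lookups, from least to most qualified name, so a failure isolates which resolution step breaks. The per-pod check reduces to the following (pod names vary per run; busybox-fc5497c4f-dsqwr is one of this run's):

    out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-dsqwr -- nslookup kubernetes.io
    out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-dsqwr -- nslookup kubernetes.default
    out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-dsqwr -- nslookup kubernetes.default.svc.cluster.local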

TestMultiControlPlane/serial/PingHostFromPods (1.47s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-dsqwr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-dsqwr -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-f5d78 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-f5d78 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-h9ghq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-703000 -- exec busybox-fc5497c4f-h9ghq -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.47s)

TestMultiControlPlane/serial/AddWorkerNode (157.64s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-703000 -v=7 --alsologtostderr
E0425 11:47:26.092138    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:47:26.098012    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:47:26.109629    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:47:26.129945    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:47:26.171020    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:47:26.251264    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:47:26.412354    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:47:26.732409    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:47:27.373463    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:47:28.655225    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:47:31.217118    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:47:36.337725    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:47:46.578840    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:48:07.060373    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:48:34.100667    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:48:48.021962    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-703000 -v=7 --alsologtostderr: (2m37.179140866s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (157.64s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-703000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.39s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.39s)

TestMultiControlPlane/serial/CopyFile (9.65s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp testdata/cp-test.txt ha-703000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiControlPlaneserialCopyFile1398422841/001/cp-test_ha-703000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000:/home/docker/cp-test.txt ha-703000-m02:/home/docker/cp-test_ha-703000_ha-703000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m02 "sudo cat /home/docker/cp-test_ha-703000_ha-703000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000:/home/docker/cp-test.txt ha-703000-m03:/home/docker/cp-test_ha-703000_ha-703000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m03 "sudo cat /home/docker/cp-test_ha-703000_ha-703000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000:/home/docker/cp-test.txt ha-703000-m04:/home/docker/cp-test_ha-703000_ha-703000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m04 "sudo cat /home/docker/cp-test_ha-703000_ha-703000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp testdata/cp-test.txt ha-703000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiControlPlaneserialCopyFile1398422841/001/cp-test_ha-703000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000-m02:/home/docker/cp-test.txt ha-703000:/home/docker/cp-test_ha-703000-m02_ha-703000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000 "sudo cat /home/docker/cp-test_ha-703000-m02_ha-703000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000-m02:/home/docker/cp-test.txt ha-703000-m03:/home/docker/cp-test_ha-703000-m02_ha-703000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m03 "sudo cat /home/docker/cp-test_ha-703000-m02_ha-703000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000-m02:/home/docker/cp-test.txt ha-703000-m04:/home/docker/cp-test_ha-703000-m02_ha-703000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m04 "sudo cat /home/docker/cp-test_ha-703000-m02_ha-703000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp testdata/cp-test.txt ha-703000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiControlPlaneserialCopyFile1398422841/001/cp-test_ha-703000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000-m03:/home/docker/cp-test.txt ha-703000:/home/docker/cp-test_ha-703000-m03_ha-703000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000 "sudo cat /home/docker/cp-test_ha-703000-m03_ha-703000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000-m03:/home/docker/cp-test.txt ha-703000-m02:/home/docker/cp-test_ha-703000-m03_ha-703000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m02 "sudo cat /home/docker/cp-test_ha-703000-m03_ha-703000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000-m03:/home/docker/cp-test.txt ha-703000-m04:/home/docker/cp-test_ha-703000-m03_ha-703000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m04 "sudo cat /home/docker/cp-test_ha-703000-m03_ha-703000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp testdata/cp-test.txt ha-703000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000-m04:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiControlPlaneserialCopyFile1398422841/001/cp-test_ha-703000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000-m04:/home/docker/cp-test.txt ha-703000:/home/docker/cp-test_ha-703000-m04_ha-703000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000 "sudo cat /home/docker/cp-test_ha-703000-m04_ha-703000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000-m04:/home/docker/cp-test.txt ha-703000-m02:/home/docker/cp-test_ha-703000-m04_ha-703000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m02 "sudo cat /home/docker/cp-test_ha-703000-m04_ha-703000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 cp ha-703000-m04:/home/docker/cp-test.txt ha-703000-m03:/home/docker/cp-test_ha-703000-m04_ha-703000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 ssh -n ha-703000-m03 "sudo cat /home/docker/cp-test_ha-703000-m04_ha-703000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.65s)
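
The CopyFile block above exercises every (source, destination) node pair: seed a node with minikube cp, fan the file out to each other node, and read it back over ssh. A hedged Go sketch of that matrix driven through the CLI (a hypothetical reproduction script; binary path, profile, and flags are exactly those logged above):

// cp_matrix.go: sketch reproducing the logged copy matrix.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	nodes := []string{"ha-703000", "ha-703000-m02", "ha-703000-m03", "ha-703000-m04"}
	for _, src := range nodes {
		// Seed the source node, then copy to every other node and read
		// the file back over ssh, as the helpers above do.
		run("-p", "ha-703000", "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			remote := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			run("-p", "ha-703000", "cp", src+":/home/docker/cp-test.txt", dst+":"+remote)
			run("-p", "ha-703000", "ssh", "-n", dst, "sudo cat "+remote)
		}
	}
}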

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (8.71s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 node stop m02 -v=7 --alsologtostderr
E0425 11:50:09.945056    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-703000 node stop m02 -v=7 --alsologtostderr: (8.341433158s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-703000 status -v=7 --alsologtostderr: exit status 7 (366.750743ms)

-- stdout --
	ha-703000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-703000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-703000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-703000-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0425 11:50:12.316513    3911 out.go:291] Setting OutFile to fd 1 ...
	I0425 11:50:12.316724    3911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:50:12.316729    3911 out.go:304] Setting ErrFile to fd 2...
	I0425 11:50:12.316733    3911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 11:50:12.316919    3911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 11:50:12.317092    3911 out.go:298] Setting JSON to false
	I0425 11:50:12.317115    3911 mustload.go:65] Loading cluster: ha-703000
	I0425 11:50:12.317160    3911 notify.go:220] Checking for updates...
	I0425 11:50:12.317468    3911 config.go:182] Loaded profile config "ha-703000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 11:50:12.317482    3911 status.go:255] checking status of ha-703000 ...
	I0425 11:50:12.317818    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:50:12.317864    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:50:12.326386    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51425
	I0425 11:50:12.326720    3911 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:50:12.327117    3911 main.go:141] libmachine: Using API Version  1
	I0425 11:50:12.327126    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:50:12.327398    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:50:12.327514    3911 main.go:141] libmachine: (ha-703000) Calling .GetState
	I0425 11:50:12.327601    3911 main.go:141] libmachine: (ha-703000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 11:50:12.327682    3911 main.go:141] libmachine: (ha-703000) DBG | hyperkit pid from json: 3344
	I0425 11:50:12.328695    3911 status.go:330] ha-703000 host status = "Running" (err=<nil>)
	I0425 11:50:12.328711    3911 host.go:66] Checking if "ha-703000" exists ...
	I0425 11:50:12.328954    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:50:12.328978    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:50:12.337419    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51427
	I0425 11:50:12.337769    3911 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:50:12.338149    3911 main.go:141] libmachine: Using API Version  1
	I0425 11:50:12.338177    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:50:12.338384    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:50:12.338491    3911 main.go:141] libmachine: (ha-703000) Calling .GetIP
	I0425 11:50:12.338587    3911 host.go:66] Checking if "ha-703000" exists ...
	I0425 11:50:12.338880    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:50:12.338903    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:50:12.347925    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51429
	I0425 11:50:12.348242    3911 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:50:12.348547    3911 main.go:141] libmachine: Using API Version  1
	I0425 11:50:12.348557    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:50:12.348779    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:50:12.348898    3911 main.go:141] libmachine: (ha-703000) Calling .DriverName
	I0425 11:50:12.349038    3911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 11:50:12.349056    3911 main.go:141] libmachine: (ha-703000) Calling .GetSSHHostname
	I0425 11:50:12.349138    3911 main.go:141] libmachine: (ha-703000) Calling .GetSSHPort
	I0425 11:50:12.349218    3911 main.go:141] libmachine: (ha-703000) Calling .GetSSHKeyPath
	I0425 11:50:12.349300    3911 main.go:141] libmachine: (ha-703000) Calling .GetSSHUsername
	I0425 11:50:12.349404    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/ha-703000/id_rsa Username:docker}
	I0425 11:50:12.390384    3911 ssh_runner.go:195] Run: systemctl --version
	I0425 11:50:12.395226    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 11:50:12.406130    3911 kubeconfig.go:125] found "ha-703000" server: "https://192.169.0.254:8443"
	I0425 11:50:12.406155    3911 api_server.go:166] Checking apiserver status ...
	I0425 11:50:12.406192    3911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 11:50:12.417607    3911 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1858/cgroup
	W0425 11:50:12.425084    3911 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1858/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 11:50:12.425151    3911 ssh_runner.go:195] Run: ls
	I0425 11:50:12.428652    3911 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0425 11:50:12.431721    3911 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0425 11:50:12.431735    3911 status.go:422] ha-703000 apiserver status = Running (err=<nil>)
	I0425 11:50:12.431745    3911 status.go:257] ha-703000 status: &{Name:ha-703000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 11:50:12.431755    3911 status.go:255] checking status of ha-703000-m02 ...
	I0425 11:50:12.431999    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:50:12.432020    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:50:12.440811    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51433
	I0425 11:50:12.441182    3911 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:50:12.441511    3911 main.go:141] libmachine: Using API Version  1
	I0425 11:50:12.441521    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:50:12.441735    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:50:12.441847    3911 main.go:141] libmachine: (ha-703000-m02) Calling .GetState
	I0425 11:50:12.441927    3911 main.go:141] libmachine: (ha-703000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 11:50:12.441995    3911 main.go:141] libmachine: (ha-703000-m02) DBG | hyperkit pid from json: 3363
	I0425 11:50:12.443006    3911 main.go:141] libmachine: (ha-703000-m02) DBG | hyperkit pid 3363 missing from process table
	I0425 11:50:12.443058    3911 status.go:330] ha-703000-m02 host status = "Stopped" (err=<nil>)
	I0425 11:50:12.443069    3911 status.go:343] host is not running, skipping remaining checks
	I0425 11:50:12.443076    3911 status.go:257] ha-703000-m02 status: &{Name:ha-703000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 11:50:12.443088    3911 status.go:255] checking status of ha-703000-m03 ...
	I0425 11:50:12.443365    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:50:12.443391    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:50:12.452019    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51435
	I0425 11:50:12.452375    3911 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:50:12.452743    3911 main.go:141] libmachine: Using API Version  1
	I0425 11:50:12.452761    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:50:12.453037    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:50:12.453187    3911 main.go:141] libmachine: (ha-703000-m03) Calling .GetState
	I0425 11:50:12.453318    3911 main.go:141] libmachine: (ha-703000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 11:50:12.453401    3911 main.go:141] libmachine: (ha-703000-m03) DBG | hyperkit pid from json: 3391
	I0425 11:50:12.454699    3911 status.go:330] ha-703000-m03 host status = "Running" (err=<nil>)
	I0425 11:50:12.454710    3911 host.go:66] Checking if "ha-703000-m03" exists ...
	I0425 11:50:12.454962    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:50:12.454986    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:50:12.463546    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51437
	I0425 11:50:12.463921    3911 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:50:12.464278    3911 main.go:141] libmachine: Using API Version  1
	I0425 11:50:12.464296    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:50:12.464505    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:50:12.464616    3911 main.go:141] libmachine: (ha-703000-m03) Calling .GetIP
	I0425 11:50:12.464696    3911 host.go:66] Checking if "ha-703000-m03" exists ...
	I0425 11:50:12.464958    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:50:12.464980    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:50:12.473593    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51439
	I0425 11:50:12.473944    3911 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:50:12.474326    3911 main.go:141] libmachine: Using API Version  1
	I0425 11:50:12.474345    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:50:12.474568    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:50:12.474691    3911 main.go:141] libmachine: (ha-703000-m03) Calling .DriverName
	I0425 11:50:12.474827    3911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 11:50:12.474840    3911 main.go:141] libmachine: (ha-703000-m03) Calling .GetSSHHostname
	I0425 11:50:12.474942    3911 main.go:141] libmachine: (ha-703000-m03) Calling .GetSSHPort
	I0425 11:50:12.475021    3911 main.go:141] libmachine: (ha-703000-m03) Calling .GetSSHKeyPath
	I0425 11:50:12.475106    3911 main.go:141] libmachine: (ha-703000-m03) Calling .GetSSHUsername
	I0425 11:50:12.475190    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/ha-703000-m03/id_rsa Username:docker}
	I0425 11:50:12.509940    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 11:50:12.521350    3911 kubeconfig.go:125] found "ha-703000" server: "https://192.169.0.254:8443"
	I0425 11:50:12.521364    3911 api_server.go:166] Checking apiserver status ...
	I0425 11:50:12.521400    3911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 11:50:12.533248    3911 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1956/cgroup
	W0425 11:50:12.541304    3911 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1956/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 11:50:12.541395    3911 ssh_runner.go:195] Run: ls
	I0425 11:50:12.545061    3911 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0425 11:50:12.548113    3911 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0425 11:50:12.548125    3911 status.go:422] ha-703000-m03 apiserver status = Running (err=<nil>)
	I0425 11:50:12.548133    3911 status.go:257] ha-703000-m03 status: &{Name:ha-703000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 11:50:12.548143    3911 status.go:255] checking status of ha-703000-m04 ...
	I0425 11:50:12.548404    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:50:12.548423    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:50:12.557245    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51443
	I0425 11:50:12.557602    3911 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:50:12.557936    3911 main.go:141] libmachine: Using API Version  1
	I0425 11:50:12.557950    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:50:12.558177    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:50:12.558287    3911 main.go:141] libmachine: (ha-703000-m04) Calling .GetState
	I0425 11:50:12.558377    3911 main.go:141] libmachine: (ha-703000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 11:50:12.558460    3911 main.go:141] libmachine: (ha-703000-m04) DBG | hyperkit pid from json: 3491
	I0425 11:50:12.559490    3911 status.go:330] ha-703000-m04 host status = "Running" (err=<nil>)
	I0425 11:50:12.559499    3911 host.go:66] Checking if "ha-703000-m04" exists ...
	I0425 11:50:12.559759    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:50:12.559780    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:50:12.568307    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51445
	I0425 11:50:12.568650    3911 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:50:12.569010    3911 main.go:141] libmachine: Using API Version  1
	I0425 11:50:12.569028    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:50:12.569256    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:50:12.569361    3911 main.go:141] libmachine: (ha-703000-m04) Calling .GetIP
	I0425 11:50:12.569437    3911 host.go:66] Checking if "ha-703000-m04" exists ...
	I0425 11:50:12.569684    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 11:50:12.569714    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 11:50:12.578455    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51447
	I0425 11:50:12.578949    3911 main.go:141] libmachine: () Calling .GetVersion
	I0425 11:50:12.579337    3911 main.go:141] libmachine: Using API Version  1
	I0425 11:50:12.579349    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 11:50:12.579666    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0425 11:50:12.579789    3911 main.go:141] libmachine: (ha-703000-m04) Calling .DriverName
	I0425 11:50:12.579912    3911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 11:50:12.579924    3911 main.go:141] libmachine: (ha-703000-m04) Calling .GetSSHHostname
	I0425 11:50:12.580003    3911 main.go:141] libmachine: (ha-703000-m04) Calling .GetSSHPort
	I0425 11:50:12.580074    3911 main.go:141] libmachine: (ha-703000-m04) Calling .GetSSHKeyPath
	I0425 11:50:12.580161    3911 main.go:141] libmachine: (ha-703000-m04) Calling .GetSSHUsername
	I0425 11:50:12.580256    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/ha-703000-m04/id_rsa Username:docker}
	I0425 11:50:12.608906    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 11:50:12.620132    3911 status.go:257] ha-703000-m04 status: &{Name:ha-703000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.71s)
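
The plain-text status block above is line-oriented: a bare machine name opens a node section, "key: value" lines fill it, and a blank line closes it. A small Go sketch of a parser for that shape (an illustration only, not minikube's own code):

// status_parse.go: sketch parsing the `minikube status` text shown above.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const sample = `ha-703000
type: Control Plane
host: Running

ha-703000-m02
type: Control Plane
host: Stopped
`

func main() {
	statuses := map[string]map[string]string{}
	var node string
	sc := bufio.NewScanner(strings.NewReader(sample))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "":
			node = "" // a blank line ends the current node's block
		case !strings.Contains(line, ":"):
			node = line // a bare name starts a new node block
			statuses[node] = map[string]string{}
		case node != "":
			k, v, _ := strings.Cut(line, ": ")
			statuses[node][k] = v
		}
	}
	fmt.Println(statuses["ha-703000-m02"]["host"]) // prints: Stopped
}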

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.3s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.30s)

TestMultiControlPlane/serial/RestartSecondaryNode (162.81s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 node start m02 -v=7 --alsologtostderr
E0425 11:52:26.095032    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 11:52:53.787767    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-703000 node start m02 -v=7 --alsologtostderr: (2m42.300679987s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (162.81s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (324.37s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-703000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-703000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-703000 -v=7 --alsologtostderr: (27.151080068s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-703000 --wait=true -v=7 --alsologtostderr
E0425 11:53:34.104938    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:54:57.164270    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 11:57:26.097972    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-703000 --wait=true -v=7 --alsologtostderr: (4m57.088200086s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-703000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (324.37s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.22s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-703000 node delete m03 -v=7 --alsologtostderr: (7.774345092s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.22s)
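
The go-template passed to kubectl above walks every node's conditions and prints the status of each Ready condition. kubectl's go-template output mode uses Go's standard template language, so the same template can be checked locally against mock data (a sketch; the node-list shape is abbreviated):

// ready_template.go: the logged Ready-condition template over mock nodes.
package main

import (
	"log"
	"os"
	"text/template"
)

const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Minimal stand-in for the `kubectl get nodes -o json` shape.
	data := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(tpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
	// Prints one " True" line per Ready node, which the assertion scans for.
}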

                                                
                                    
TestMultiControlPlane/serial/StopCluster (249.54s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 stop -v=7 --alsologtostderr
E0425 12:02:26.102919    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 12:03:34.111042    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
E0425 12:03:49.156221    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-703000 stop -v=7 --alsologtostderr: (4m9.439940455s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-703000 status -v=7 --alsologtostderr: exit status 7 (103.443149ms)

-- stdout --
	ha-703000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-703000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-703000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0425 12:06:25.847529    4344 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:06:25.847746    4344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:06:25.847752    4344 out.go:304] Setting ErrFile to fd 2...
	I0425 12:06:25.847756    4344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:06:25.847930    4344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:06:25.848130    4344 out.go:298] Setting JSON to false
	I0425 12:06:25.848157    4344 mustload.go:65] Loading cluster: ha-703000
	I0425 12:06:25.848194    4344 notify.go:220] Checking for updates...
	I0425 12:06:25.849459    4344 config.go:182] Loaded profile config "ha-703000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:06:25.849479    4344 status.go:255] checking status of ha-703000 ...
	I0425 12:06:25.849846    4344 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:06:25.849889    4344 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:06:25.858640    4344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51796
	I0425 12:06:25.858999    4344 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:06:25.859391    4344 main.go:141] libmachine: Using API Version  1
	I0425 12:06:25.859407    4344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:06:25.859619    4344 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:06:25.859732    4344 main.go:141] libmachine: (ha-703000) Calling .GetState
	I0425 12:06:25.859811    4344 main.go:141] libmachine: (ha-703000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:06:25.859878    4344 main.go:141] libmachine: (ha-703000) DBG | hyperkit pid from json: 4036
	I0425 12:06:25.860767    4344 main.go:141] libmachine: (ha-703000) DBG | hyperkit pid 4036 missing from process table
	I0425 12:06:25.860826    4344 status.go:330] ha-703000 host status = "Stopped" (err=<nil>)
	I0425 12:06:25.860840    4344 status.go:343] host is not running, skipping remaining checks
	I0425 12:06:25.860846    4344 status.go:257] ha-703000 status: &{Name:ha-703000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:06:25.860871    4344 status.go:255] checking status of ha-703000-m02 ...
	I0425 12:06:25.861165    4344 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:06:25.861187    4344 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:06:25.869340    4344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51798
	I0425 12:06:25.869647    4344 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:06:25.870041    4344 main.go:141] libmachine: Using API Version  1
	I0425 12:06:25.870067    4344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:06:25.870297    4344 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:06:25.870409    4344 main.go:141] libmachine: (ha-703000-m02) Calling .GetState
	I0425 12:06:25.870490    4344 main.go:141] libmachine: (ha-703000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:06:25.870571    4344 main.go:141] libmachine: (ha-703000-m02) DBG | hyperkit pid from json: 4048
	I0425 12:06:25.876886    4344 status.go:330] ha-703000-m02 host status = "Stopped" (err=<nil>)
	I0425 12:06:25.876887    4344 main.go:141] libmachine: (ha-703000-m02) DBG | hyperkit pid 4048 missing from process table
	I0425 12:06:25.876896    4344 status.go:343] host is not running, skipping remaining checks
	I0425 12:06:25.876903    4344 status.go:257] ha-703000-m02 status: &{Name:ha-703000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:06:25.876925    4344 status.go:255] checking status of ha-703000-m04 ...
	I0425 12:06:25.877189    4344 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:06:25.877225    4344 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:06:25.885628    4344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51800
	I0425 12:06:25.885956    4344 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:06:25.886394    4344 main.go:141] libmachine: Using API Version  1
	I0425 12:06:25.886409    4344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:06:25.886641    4344 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:06:25.886768    4344 main.go:141] libmachine: (ha-703000-m04) Calling .GetState
	I0425 12:06:25.886857    4344 main.go:141] libmachine: (ha-703000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:06:25.886939    4344 main.go:141] libmachine: (ha-703000-m04) DBG | hyperkit pid from json: 4106
	I0425 12:06:25.887830    4344 main.go:141] libmachine: (ha-703000-m04) DBG | hyperkit pid 4106 missing from process table
	I0425 12:06:25.887860    4344 status.go:330] ha-703000-m04 host status = "Stopped" (err=<nil>)
	I0425 12:06:25.887871    4344 status.go:343] host is not running, skipping remaining checks
	I0425 12:06:25.887878    4344 status.go:257] ha-703000-m04 status: &{Name:ha-703000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (249.54s)

TestMultiControlPlane/serial/RestartCluster (118.69s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-703000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0425 12:07:26.093638    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-703000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (1m58.221553357s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-703000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (118.69s)

TestJSONOutput/start/Command (51.15s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-379000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0425 12:18:34.112873    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-379000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (51.152243925s)
--- PASS: TestJSONOutput/start/Command (51.15s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-379000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-379000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-379000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-379000 --output=json --user=testUser: (8.342047083s)
--- PASS: TestJSONOutput/stop/Command (8.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.78s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-217000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-217000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (408.873678ms)

-- stdout --
	{"specversion":"1.0","id":"9b7a6a9c-8cfb-4f6a-965d-8ade0a668606","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-217000] minikube v1.33.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ba1c31e-e9d5-4c76-97eb-41d117dadd5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18757"}}
	{"specversion":"1.0","id":"2cb7c78c-3516-4dc2-ae81-a2cf1a16325b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig"}}
	{"specversion":"1.0","id":"a45411cf-1de5-413e-ab13-66495951a6c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"128126aa-18ae-4639-8888-5ef531732f0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"899b22da-aaff-4a1f-885b-58e9335e1214","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube"}}
	{"specversion":"1.0","id":"52e1efad-b3a3-4d29-a279-725ef023ea96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6a64ea50-0885-48a6-823e-cfb5e66458bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-217000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-217000
--- PASS: TestErrorJSONOutput (0.78s)
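
Each --output=json line above is a CloudEvents-style envelope (specversion, id, source, type, data). A minimal Go sketch decoding the error event from this run (struct shape inferred from the logged JSON, not taken from minikube's source):

// events_decode.go: decode one logged minikube JSON event.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Trimmed copy of the io.k8s.sigs.minikube.error event shown above.
	line := `{"specversion":"1.0","id":"6a64ea50-0885-48a6-823e-cfb5e66458bb",` +
		`"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",` +
		`"datacontenttype":"application/json","data":{"exitcode":"56",` +
		`"message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		log.Fatal(err)
	}
	// The error event carries the exit code the test asserts on (56 here).
	fmt.Println(e.Type, e.Data["exitcode"], e.Data["message"])
}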

                                                
                                    
TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (206.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-032000 --driver=hyperkit 
E0425 12:20:29.160412    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-032000 --driver=hyperkit : (2m36.403624564s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-035000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-035000 --driver=hyperkit : (39.999267933s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-032000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-035000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-035000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-035000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-035000: (3.42334326s)
helpers_test.go:175: Cleaning up "first-032000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-032000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-032000: (5.289606504s)
--- PASS: TestMinikubeProfile (206.01s)
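
The -ojson variant used twice above emits machine-readable profile data. A schema-agnostic Go sketch that shells out and pretty-prints whatever JSON comes back (hypothetical tooling, not part of the suite; the binary path is the one used in this run):

// profiles_json.go: decode into interface{} so no schema is assumed.
package main

import (
	"encoding/json"
	"log"
	"os"
	"os/exec"
)

func main() {
	raw, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}
	var v any
	if err := json.Unmarshal(raw, &v); err != nil {
		log.Fatalf("not valid JSON: %v", err)
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	enc.Encode(v) // re-emit indented for inspection
}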

                                                
                                    
TestMountStart/serial/StartWithMountFirst (18.19s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-237000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0425 12:22:26.170921    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-237000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (17.187739554s)
--- PASS: TestMountStart/serial/StartWithMountFirst (18.19s)

TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-237000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-237000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

TestMountStart/serial/StartWithMountSecond (18.52s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-249000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-249000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (17.520400701s)
--- PASS: TestMountStart/serial/StartWithMountSecond (18.52s)

TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-249000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-249000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (2.38s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-237000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-237000 --alsologtostderr -v=5: (2.377264135s)
--- PASS: TestMountStart/serial/DeleteFirst (2.38s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-249000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-249000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (2.4s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-249000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-249000: (2.401789348s)
--- PASS: TestMountStart/serial/Stop (2.40s)

TestMountStart/serial/RestartStopped (18.75s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-249000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-249000: (17.745284179s)
--- PASS: TestMountStart/serial/RestartStopped (18.75s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-249000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-249000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (100.36s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-034000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0425 12:23:34.180934    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-034000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m40.107841115s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.36s)

TestMultiNode/serial/DeployApp2Nodes (4.36s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-034000 -- rollout status deployment/busybox: (2.657882602s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- exec busybox-fc5497c4f-hkq6z -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- exec busybox-fc5497c4f-mw494 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- exec busybox-fc5497c4f-hkq6z -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- exec busybox-fc5497c4f-mw494 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- exec busybox-fc5497c4f-hkq6z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- exec busybox-fc5497c4f-mw494 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.36s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- exec busybox-fc5497c4f-hkq6z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- exec busybox-fc5497c4f-hkq6z -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- exec busybox-fc5497c4f-mw494 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-034000 -- exec busybox-fc5497c4f-mw494 -- sh -c "ping -c 1 192.169.0.1"
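
The pipeline in the two exec steps above extracts the host's IP from inside each pod before pinging it: BusyBox nslookup resolves host.minikube.internal, awk 'NR==5' picks the line on which this BusyBox build prints the resolved address, and cut -d' ' -f3 takes the address itself, 192.169.0.1 in this run (the host-side address on the hyperkit network). A by-hand equivalent, using a pod name generated by this run:

$ kubectl --context multinode-034000 exec busybox-fc5497c4f-hkq6z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
# prints the host IP (192.169.0.1 here); then confirm it is reachable:
$ kubectl --context multinode-034000 exec busybox-fc5497c4f-hkq6z -- sh -c "ping -c 1 192.169.0.1"
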
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

TestMultiNode/serial/AddNode (37.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-034000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-034000 -v 3 --alsologtostderr: (36.751035583s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (37.07s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-034000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (5.44s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 cp testdata/cp-test.txt multinode-034000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 cp multinode-034000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1757473431/001/cp-test_multinode-034000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 cp multinode-034000:/home/docker/cp-test.txt multinode-034000-m02:/home/docker/cp-test_multinode-034000_multinode-034000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m02 "sudo cat /home/docker/cp-test_multinode-034000_multinode-034000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 cp multinode-034000:/home/docker/cp-test.txt multinode-034000-m03:/home/docker/cp-test_multinode-034000_multinode-034000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m03 "sudo cat /home/docker/cp-test_multinode-034000_multinode-034000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 cp testdata/cp-test.txt multinode-034000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 cp multinode-034000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1757473431/001/cp-test_multinode-034000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 cp multinode-034000-m02:/home/docker/cp-test.txt multinode-034000:/home/docker/cp-test_multinode-034000-m02_multinode-034000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000 "sudo cat /home/docker/cp-test_multinode-034000-m02_multinode-034000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 cp multinode-034000-m02:/home/docker/cp-test.txt multinode-034000-m03:/home/docker/cp-test_multinode-034000-m02_multinode-034000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m03 "sudo cat /home/docker/cp-test_multinode-034000-m02_multinode-034000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 cp testdata/cp-test.txt multinode-034000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 cp multinode-034000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1757473431/001/cp-test_multinode-034000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 cp multinode-034000-m03:/home/docker/cp-test.txt multinode-034000:/home/docker/cp-test_multinode-034000-m03_multinode-034000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000 "sudo cat /home/docker/cp-test_multinode-034000-m03_multinode-034000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 cp multinode-034000-m03:/home/docker/cp-test.txt multinode-034000-m02:/home/docker/cp-test_multinode-034000-m03_multinode-034000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m02 "sudo cat /home/docker/cp-test_multinode-034000-m03_multinode-034000-m02.txt"
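
The CopyFile matrix above exercises every direction minikube cp supports: testdata into each node, each node back to the local machine, and node to node, with every copy verified by an ssh "sudo cat" of the destination. One leg of the matrix, exactly as run above:

$ out/minikube-darwin-amd64 -p multinode-034000 cp testdata/cp-test.txt multinode-034000-m02:/home/docker/cp-test.txt
$ out/minikube-darwin-amd64 -p multinode-034000 ssh -n multinode-034000-m02 "sudo cat /home/docker/cp-test.txt"
# the cat output must match testdata/cp-test.txt for the step to pass
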
--- PASS: TestMultiNode/serial/CopyFile (5.44s)

TestMultiNode/serial/StopNode (2.88s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-034000 node stop m03: (2.363118525s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status: exit status 7 (257.160806ms)

-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-034000 status --alsologtostderr: exit status 7 (260.851577ms)

-- stdout --
	multinode-034000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-034000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-034000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0425 12:25:56.656860    5596 out.go:291] Setting OutFile to fd 1 ...
	I0425 12:25:56.657053    5596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:25:56.657058    5596 out.go:304] Setting ErrFile to fd 2...
	I0425 12:25:56.657062    5596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 12:25:56.657242    5596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18757-1425/.minikube/bin
	I0425 12:25:56.657416    5596 out.go:298] Setting JSON to false
	I0425 12:25:56.657438    5596 mustload.go:65] Loading cluster: multinode-034000
	I0425 12:25:56.657475    5596 notify.go:220] Checking for updates...
	I0425 12:25:56.658911    5596 config.go:182] Loaded profile config "multinode-034000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0425 12:25:56.658933    5596 status.go:255] checking status of multinode-034000 ...
	I0425 12:25:56.659285    5596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:25:56.659326    5596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:25:56.668488    5596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52787
	I0425 12:25:56.668881    5596 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:25:56.669268    5596 main.go:141] libmachine: Using API Version  1
	I0425 12:25:56.669278    5596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:25:56.669476    5596 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:25:56.669581    5596 main.go:141] libmachine: (multinode-034000) Calling .GetState
	I0425 12:25:56.669663    5596 main.go:141] libmachine: (multinode-034000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:25:56.669731    5596 main.go:141] libmachine: (multinode-034000) DBG | hyperkit pid from json: 5283
	I0425 12:25:56.670905    5596 status.go:330] multinode-034000 host status = "Running" (err=<nil>)
	I0425 12:25:56.670921    5596 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:25:56.671162    5596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:25:56.671185    5596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:25:56.679628    5596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52789
	I0425 12:25:56.679967    5596 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:25:56.680350    5596 main.go:141] libmachine: Using API Version  1
	I0425 12:25:56.680367    5596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:25:56.680550    5596 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:25:56.680664    5596 main.go:141] libmachine: (multinode-034000) Calling .GetIP
	I0425 12:25:56.680752    5596 host.go:66] Checking if "multinode-034000" exists ...
	I0425 12:25:56.681003    5596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:25:56.681027    5596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:25:56.693166    5596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52791
	I0425 12:25:56.693522    5596 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:25:56.693844    5596 main.go:141] libmachine: Using API Version  1
	I0425 12:25:56.693853    5596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:25:56.694108    5596 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:25:56.694243    5596 main.go:141] libmachine: (multinode-034000) Calling .DriverName
	I0425 12:25:56.694381    5596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:25:56.694401    5596 main.go:141] libmachine: (multinode-034000) Calling .GetSSHHostname
	I0425 12:25:56.694477    5596 main.go:141] libmachine: (multinode-034000) Calling .GetSSHPort
	I0425 12:25:56.694556    5596 main.go:141] libmachine: (multinode-034000) Calling .GetSSHKeyPath
	I0425 12:25:56.694640    5596 main.go:141] libmachine: (multinode-034000) Calling .GetSSHUsername
	I0425 12:25:56.694719    5596 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000/id_rsa Username:docker}
	I0425 12:25:56.727550    5596 ssh_runner.go:195] Run: systemctl --version
	I0425 12:25:56.731785    5596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:25:56.742789    5596 kubeconfig.go:125] found "multinode-034000" server: "https://192.169.0.16:8443"
	I0425 12:25:56.742812    5596 api_server.go:166] Checking apiserver status ...
	I0425 12:25:56.742848    5596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 12:25:56.753927    5596 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0425 12:25:56.761144    5596 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 12:25:56.761185    5596 ssh_runner.go:195] Run: ls
	I0425 12:25:56.768541    5596 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0425 12:25:56.771473    5596 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0425 12:25:56.771486    5596 status.go:422] multinode-034000 apiserver status = Running (err=<nil>)
	I0425 12:25:56.771498    5596 status.go:257] multinode-034000 status: &{Name:multinode-034000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:25:56.771512    5596 status.go:255] checking status of multinode-034000-m02 ...
	I0425 12:25:56.771777    5596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:25:56.771796    5596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:25:56.780260    5596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52795
	I0425 12:25:56.780590    5596 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:25:56.780945    5596 main.go:141] libmachine: Using API Version  1
	I0425 12:25:56.780961    5596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:25:56.781181    5596 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:25:56.781300    5596 main.go:141] libmachine: (multinode-034000-m02) Calling .GetState
	I0425 12:25:56.781387    5596 main.go:141] libmachine: (multinode-034000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:25:56.781463    5596 main.go:141] libmachine: (multinode-034000-m02) DBG | hyperkit pid from json: 5309
	I0425 12:25:56.782643    5596 status.go:330] multinode-034000-m02 host status = "Running" (err=<nil>)
	I0425 12:25:56.782651    5596 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:25:56.782892    5596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:25:56.782914    5596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:25:56.791330    5596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52797
	I0425 12:25:56.791677    5596 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:25:56.791990    5596 main.go:141] libmachine: Using API Version  1
	I0425 12:25:56.792001    5596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:25:56.792224    5596 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:25:56.792345    5596 main.go:141] libmachine: (multinode-034000-m02) Calling .GetIP
	I0425 12:25:56.792429    5596 host.go:66] Checking if "multinode-034000-m02" exists ...
	I0425 12:25:56.792690    5596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:25:56.792713    5596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:25:56.801192    5596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52799
	I0425 12:25:56.801513    5596 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:25:56.801809    5596 main.go:141] libmachine: Using API Version  1
	I0425 12:25:56.801819    5596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:25:56.802017    5596 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:25:56.802128    5596 main.go:141] libmachine: (multinode-034000-m02) Calling .DriverName
	I0425 12:25:56.802246    5596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 12:25:56.802257    5596 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHHostname
	I0425 12:25:56.802341    5596 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHPort
	I0425 12:25:56.802422    5596 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHKeyPath
	I0425 12:25:56.802499    5596 main.go:141] libmachine: (multinode-034000-m02) Calling .GetSSHUsername
	I0425 12:25:56.802576    5596 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18757-1425/.minikube/machines/multinode-034000-m02/id_rsa Username:docker}
	I0425 12:25:56.832856    5596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 12:25:56.844334    5596 status.go:257] multinode-034000-m02 status: &{Name:multinode-034000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 12:25:56.844351    5596 status.go:255] checking status of multinode-034000-m03 ...
	I0425 12:25:56.844629    5596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0425 12:25:56.844653    5596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0425 12:25:56.853555    5596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52802
	I0425 12:25:56.853900    5596 main.go:141] libmachine: () Calling .GetVersion
	I0425 12:25:56.854227    5596 main.go:141] libmachine: Using API Version  1
	I0425 12:25:56.854238    5596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 12:25:56.854447    5596 main.go:141] libmachine: () Calling .GetMachineName
	I0425 12:25:56.854553    5596 main.go:141] libmachine: (multinode-034000-m03) Calling .GetState
	I0425 12:25:56.854645    5596 main.go:141] libmachine: (multinode-034000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0425 12:25:56.854709    5596 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid from json: 5383
	I0425 12:25:56.855877    5596 main.go:141] libmachine: (multinode-034000-m03) DBG | hyperkit pid 5383 missing from process table
	I0425 12:25:56.855918    5596 status.go:330] multinode-034000-m03 host status = "Stopped" (err=<nil>)
	I0425 12:25:56.855930    5596 status.go:343] host is not running, skipping remaining checks
	I0425 12:25:56.855937    5596 status.go:257] multinode-034000-m03 status: &{Name:multinode-034000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
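
The two non-zero exits above are expected rather than failures: with node m03 stopped, minikube status signals the degraded state through its exit code (7 here) even though the control plane is healthy; later checks in this report annotate the same convention as "exit status 7 (may be ok)". The JSON form used elsewhere in this suite makes the per-node state easier to inspect than the table output:

$ out/minikube-darwin-amd64 -p multinode-034000 status --output json --alsologtostderr
# at this point in the run, the multinode-034000-m03 entry reports Host
# and Kubelet as Stopped while the other two nodes report Running
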
--- PASS: TestMultiNode/serial/StopNode (2.88s)

TestMultiNode/serial/ValidateNameConflict (49.11s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-034000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-034000-m03 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-034000-m03 --driver=hyperkit : exit status 14 (477.188992ms)

-- stdout --
	* [multinode-034000-m03] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-034000-m03' is duplicated with machine name 'multinode-034000-m03' in profile 'multinode-034000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
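
Exit status 14 (MK_USAGE) is exactly what this step asserts: "multinode-034000-m03" is already the generated machine name of the third node inside the multinode-034000 profile, so it is rejected as a standalone profile name. Collisions can be checked up front with the same listing command this suite uses elsewhere:

$ out/minikube-darwin-amd64 profile list --output json
# lists every profile together with its node names; a new profile name
# must collide with neither
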
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-034000-m04 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-034000-m04 --driver=hyperkit : (40.751949051s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-034000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-034000: exit status 80 (274.717804ms)

-- stdout --
	* Adding node m04 to cluster multinode-034000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-034000-m04 already exists in multinode-034000-m04 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
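
Exit status 80 (GUEST_NODE_ADD) is the flip side of the previous check: node add would name the new node m04, which collides with the standalone multinode-034000-m04 profile created just above. Clearing the collision means removing the standalone profile, which is exactly the cleanup that follows:

$ out/minikube-darwin-amd64 delete -p multinode-034000-m04
# after this, "node add -p multinode-034000" is free to use the m04
# name again (the test itself stops at the cleanup step)
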
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-034000-m04
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-034000-m04: (7.542943637s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.11s)

TestPreload (160.61s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-530000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0425 12:37:09.254197    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 12:37:26.187533    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-530000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m13.698431512s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-530000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-530000 image pull gcr.io/k8s-minikube/busybox: (1.334238236s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-530000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-530000: (8.420532973s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-530000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0425 12:38:34.177929    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-530000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m11.725407161s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-530000 image list
helpers_test.go:175: Cleaning up "test-preload-530000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-530000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-530000: (5.27356257s)
--- PASS: TestPreload (160.61s)

TestScheduledStopUnix (109.65s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-144000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-144000 --memory=2048 --driver=hyperkit : (38.052244397s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-144000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-144000 -n scheduled-stop-144000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-144000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-144000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-144000 -n scheduled-stop-144000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-144000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-144000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-144000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-144000: exit status 7 (83.342785ms)

-- stdout --
	scheduled-stop-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-144000 -n scheduled-stop-144000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-144000 -n scheduled-stop-144000: exit status 7 (74.804214ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
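
The sequence above walks the whole scheduled-stop lifecycle: schedule a stop, confirm a timer is pending, cancel it, schedule a short one, and verify the host actually reaches Stopped (the final exit status 7 is the stopped-host convention, hence "may be ok"). Condensed from the commands as run:

$ out/minikube-darwin-amd64 stop -p scheduled-stop-144000 --schedule 5m
$ out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-144000
$ out/minikube-darwin-amd64 stop -p scheduled-stop-144000 --cancel-scheduled
$ out/minikube-darwin-amd64 stop -p scheduled-stop-144000 --schedule 15s
# once the 15s timer fires, "status" exits 7 and reports Stopped
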
helpers_test.go:175: Cleaning up "scheduled-stop-144000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-144000
--- PASS: TestScheduledStopUnix (109.65s)

TestSkaffold (233.35s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3378715650 version
skaffold_test.go:59: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3378715650 version: (1.480121546s)
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-285000 --memory=2600 --driver=hyperkit 
E0425 12:42:26.171620    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
E0425 12:43:34.180789    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-285000 --memory=2600 --driver=hyperkit : (2m37.014181276s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3378715650 run --minikube-profile skaffold-285000 --kube-context skaffold-285000 --status-check=true --port-forward=false --interactive=false
E0425 12:44:57.245892    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3378715650 run --minikube-profile skaffold-285000 --kube-context skaffold-285000 --status-check=true --port-forward=false --interactive=false: (56.2360385s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-98564b9-ph6k7" [63aaee08-fe0b-4a51-9bc5-19c0db712aa4] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004516478s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5f9bd857d6-s4vbz" [eae0af9e-2ef0-4ab0-a7f4-f3812fada9ca] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004779291s
helpers_test.go:175: Cleaning up "skaffold-285000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-285000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-285000: (5.27068849s)
--- PASS: TestSkaffold (233.35s)

TestRunningBinaryUpgrade (84.43s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.2890953645 start -p running-upgrade-439000 --memory=2200 --vm-driver=hyperkit 
E0425 12:48:34.186171    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.2890953645 start -p running-upgrade-439000 --memory=2200 --vm-driver=hyperkit : (54.483639118s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-439000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-439000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (23.02577183s)
helpers_test.go:175: Cleaning up "running-upgrade-439000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-439000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-439000: (5.343013893s)
--- PASS: TestRunningBinaryUpgrade (84.43s)

TestKubernetesUpgrade (116.68s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-836000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
E0425 12:49:57.681626    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 12:49:57.687287    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 12:49:57.697455    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 12:49:57.717633    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 12:49:57.759339    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 12:49:57.840097    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 12:49:58.001131    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 12:49:58.321729    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 12:49:58.962776    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 12:50:00.243093    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 12:50:02.803798    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
E0425 12:50:07.923986    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-836000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (50.917207221s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-836000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-836000: (2.385996587s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-836000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-836000 status --format={{.Host}}: exit status 7 (75.193913ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-836000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit 
E0425 12:50:18.165846    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-836000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit : (33.105464393s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-836000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-836000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-836000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (583.568999ms)

-- stdout --
	* [kubernetes-upgrade-836000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-836000
	    minikube start -p kubernetes-upgrade-836000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8360002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-836000 --kubernetes-version=v1.30.0
	    

** /stderr **
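
Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) is precisely the guard under test: minikube refuses to downgrade a live cluster in place. The recovery paths are the ones printed in the message itself; the first, recreating the cluster at the older version, reads:

$ minikube delete -p kubernetes-upgrade-836000
$ minikube start -p kubernetes-upgrade-836000 --kubernetes-version=v1.20.0
# this run takes the third option instead, restarting at v1.30.0, as
# shown in the next step
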
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-836000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-836000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit : (23.998532395s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-836000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-836000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-836000: (5.568323225s)
--- PASS: TestKubernetesUpgrade (116.68s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.06s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18757
- KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1726010748/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1726010748/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1726010748/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1726010748/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.06s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.73s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18757
- KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2159189966/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2159189966/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2159189966/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2159189966/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
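
Both skip-upgrade cases print the same two-step permission setup: the hyperkit driver binary must be owned by root and setuid to manage VMs. In this sandboxed run the sudo step cannot execute ("requires a password, and --interactive=false"), and the test still passes, since what it exercises is the skip-upgrade decision rather than the permission fix. Applied by hand to a typical install location (path assumed here for illustration), the pair is:

$ sudo chown root:wheel /usr/local/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /usr/local/bin/docker-machine-driver-hyperkit
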
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.73s)

TestStoppedBinaryUpgrade/Setup (0.99s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.99s)

TestStoppedBinaryUpgrade/Upgrade (87.23s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.1833586151 start -p stopped-upgrade-659000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.1833586151 start -p stopped-upgrade-659000 --memory=2200 --vm-driver=hyperkit : (44.135883562s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.1833586151 -p stopped-upgrade-659000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.1833586151 -p stopped-upgrade-659000 stop: (8.259837167s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-659000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-659000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (34.830056522s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (87.23s)

TestPause/serial/Start (63.02s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-892000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
E0425 12:51:19.608241    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-892000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (1m3.023508683s)
--- PASS: TestPause/serial/Start (63.02s)

TestPause/serial/SecondStartNoReconfiguration (37.44s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-892000 --alsologtostderr -v=1 --driver=hyperkit 
E0425 12:52:26.258437    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-892000 --alsologtostderr -v=1 --driver=hyperkit : (37.422875574s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.44s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.45s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-659000
E0425 12:52:41.609156    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/skaffold-285000/client.crt: no such file or directory
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-659000: (3.451364624s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.45s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.63s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-005000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-005000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (626.422007ms)

-- stdout --
	* [NoKubernetes-005000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18757
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18757-1425/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18757-1425/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.63s)
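The non-zero exit is the expected outcome here: exit status 14 is the MK_USAGE error shown in the stderr block, raised because --no-kubernetes and --kubernetes-version contradict each other. Following the hint above, either invocation below should be accepted (a sketch, not part of the suite):

	# start the profile with no Kubernetes components at all
	$ out/minikube-darwin-amd64 start -p NoKubernetes-005000 --no-kubernetes --driver=hyperkit
	# or clear any globally configured version, then pin one explicitly
	$ out/minikube-darwin-amd64 config unset kubernetes-version
	$ out/minikube-darwin-amd64 start -p NoKubernetes-005000 --kubernetes-version=v1.30.0 --driver=hyperkit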

TestNoKubernetes/serial/StartWithK8s (40.43s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-005000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-005000 --driver=hyperkit : (40.256836414s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-005000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.43s)

TestPause/serial/Pause (0.54s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-892000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.54s)

TestPause/serial/VerifyStatus (0.17s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-892000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-892000 --output=json --layout=cluster: exit status 2 (168.657765ms)

-- stdout --
	{"Name":"pause-892000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-892000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.17s)
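Note that minikube status exits non-zero (here 2) whenever the cluster is not fully running, so the test reads the paused state from the JSON payload rather than the exit code; the StatusCode values mirror HTTP conventions (200 OK, 405 Stopped, 418 Paused). A sketch for listing the paused components, assuming jq is available on the host (it is not used by the test itself):

	$ out/minikube-darwin-amd64 status -p pause-892000 --output=json --layout=cluster \
	    | jq '.Nodes[].Components | to_entries[] | select(.value.StatusName == "Paused") | .key'
	"apiserver"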

TestPause/serial/Unpause (0.55s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-892000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.55s)

TestPause/serial/PauseAgain (0.6s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-892000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.60s)

TestPause/serial/DeletePaused (5.28s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-892000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-892000 --alsologtostderr -v=5: (5.283503508s)
--- PASS: TestPause/serial/DeletePaused (5.28s)

TestPause/serial/VerifyDeletedResources (0.21s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.21s)
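VerifyDeletedResources passes when the deleted profile no longer appears in the profile list output. The same check can be made by hand against the JSON, again assuming jq is available (a hypothetical one-liner, not from the suite):

	$ out/minikube-darwin-amd64 profile list --output json | jq '[.valid[].Name] | index("pause-892000")'
	null    # null means pause-892000 is gone from the valid profiles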

TestNoKubernetes/serial/StartWithStopK8s (17.38s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-005000 --no-kubernetes --driver=hyperkit 
E0425 12:53:34.270204    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/addons-504000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-005000 --no-kubernetes --driver=hyperkit : (14.748680164s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-005000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-005000 status -o json: exit status 2 (185.431735ms)

-- stdout --
	{"Name":"NoKubernetes-005000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-005000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-005000: (2.449999676s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.38s)

TestNoKubernetes/serial/Start (20.95s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-005000 --no-kubernetes --driver=hyperkit 
E0425 12:53:49.318654    1885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18757-1425/.minikube/profiles/functional-380000/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-005000 --no-kubernetes --driver=hyperkit : (20.950350986s)
--- PASS: TestNoKubernetes/serial/Start (20.95s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.14s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-005000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-005000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (140.650886ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.14s)
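Here, too, the non-zero exit is the assertion: systemctl is-active exits 0 only when the unit is active, and the "ssh: Process exited with status 3" line is systemctl's own exit code (3 conventionally meaning the unit is not running) propagated through minikube ssh. Dropping --quiet makes the state visible when checking by hand:

	$ out/minikube-darwin-amd64 ssh -p NoKubernetes-005000 "sudo systemctl is-active kubelet"
	inactive    # exit status 3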

TestNoKubernetes/serial/ProfileList (0.55s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.55s)

TestNoKubernetes/serial/Stop (2.39s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-005000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-005000: (2.392206718s)
--- PASS: TestNoKubernetes/serial/Stop (2.39s)

TestNoKubernetes/serial/StartNoArgs (19.24s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-005000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-005000 --driver=hyperkit : (19.240354079s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.24s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-005000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-005000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (133.109781ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

Test skip (17/227)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
