=== RUN TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run: out/minikube-linux-amd64 start -p multinode-391061 --wait=true -v=8 --alsologtostderr --driver=kvm2
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-391061 --wait=true -v=8 --alsologtostderr --driver=kvm2 : exit status 90 (1m21.256349954s)
-- stdout --
* [multinode-391061] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17486
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting control plane node multinode-391061 in cluster multinode-391061
* Restarting existing kvm2 VM for "multinode-391061" ...
* Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
* Configuring CNI (Container Networking Interface) ...
* Enabled addons:
* Verifying Kubernetes components...
* Starting worker node multinode-391061-m02 in cluster multinode-391061
* Restarting existing kvm2 VM for "multinode-391061-m02" ...
* Found network options:
- NO_PROXY=192.168.39.43
-- /stdout --
** stderr **
I1101 00:08:49.696747 30593 out.go:296] Setting OutFile to fd 1 ...
I1101 00:08:49.696976 30593 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:08:49.696984 30593 out.go:309] Setting ErrFile to fd 2...
I1101 00:08:49.696989 30593 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:08:49.697199 30593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
I1101 00:08:49.697724 30593 out.go:303] Setting JSON to false
I1101 00:08:49.698581 30593 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3079,"bootTime":1698794251,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1101 00:08:49.698643 30593 start.go:138] virtualization: kvm guest
I1101 00:08:49.701257 30593 out.go:177] * [multinode-391061] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
I1101 00:08:49.702839 30593 out.go:177] - MINIKUBE_LOCATION=17486
I1101 00:08:49.702844 30593 notify.go:220] Checking for updates...
I1101 00:08:49.704612 30593 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1101 00:08:49.706320 30593 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
I1101 00:08:49.707852 30593 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
I1101 00:08:49.709325 30593 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1101 00:08:49.710727 30593 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1101 00:08:49.712746 30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1101 00:08:49.713116 30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1101 00:08:49.713162 30593 main.go:141] libmachine: Launching plugin server for driver kvm2
I1101 00:08:49.727252 30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
I1101 00:08:49.727584 30593 main.go:141] libmachine: () Calling .GetVersion
I1101 00:08:49.728056 30593 main.go:141] libmachine: Using API Version 1
I1101 00:08:49.728075 30593 main.go:141] libmachine: () Calling .SetConfigRaw
I1101 00:08:49.728412 30593 main.go:141] libmachine: () Calling .GetMachineName
I1101 00:08:49.728601 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:08:49.728809 30593 driver.go:378] Setting default libvirt URI to qemu:///system
I1101 00:08:49.729119 30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1101 00:08:49.729158 30593 main.go:141] libmachine: Launching plugin server for driver kvm2
I1101 00:08:49.742929 30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
I1101 00:08:49.743302 30593 main.go:141] libmachine: () Calling .GetVersion
I1101 00:08:49.743756 30593 main.go:141] libmachine: Using API Version 1
I1101 00:08:49.743779 30593 main.go:141] libmachine: () Calling .SetConfigRaw
I1101 00:08:49.744063 30593 main.go:141] libmachine: () Calling .GetMachineName
I1101 00:08:49.744234 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:08:49.779391 30593 out.go:177] * Using the kvm2 driver based on existing profile
I1101 00:08:49.780999 30593 start.go:298] selected driver: kvm2
I1101 00:08:49.781015 30593 start.go:902] validating driver "kvm2" against &{Name:multinode-391061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1101 00:08:49.781172 30593 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1101 00:08:49.781470 30593 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 00:08:49.781541 30593 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7251/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1101 00:08:49.796518 30593 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
I1101 00:08:49.797197 30593 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 00:08:49.797254 30593 cni.go:84] Creating CNI manager for ""
I1101 00:08:49.797263 30593 cni.go:136] 2 nodes found, recommending kindnet
I1101 00:08:49.797274 30593 start_flags.go:323] config:
{Name:multinode-391061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1101 00:08:49.797449 30593 iso.go:125] acquiring lock: {Name:mk56e0e42e3cb427bae1fd4521b75db693021ac1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 00:08:49.799445 30593 out.go:177] * Starting control plane node multinode-391061 in cluster multinode-391061
I1101 00:08:49.802107 30593 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I1101 00:08:49.802154 30593 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
I1101 00:08:49.802163 30593 cache.go:56] Caching tarball of preloaded images
I1101 00:08:49.802239 30593 preload.go:174] Found /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1101 00:08:49.802251 30593 cache.go:59] Finished verifying existence of preloaded tar for v1.28.3 on docker
I1101 00:08:49.802383 30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
I1101 00:08:49.802605 30593 start.go:365] acquiring machines lock for multinode-391061: {Name:mkd250049361a5d831a3d31c273569334737e54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1101 00:08:49.802660 30593 start.go:369] acquired machines lock for "multinode-391061" in 32.142µs
I1101 00:08:49.802683 30593 start.go:96] Skipping create...Using existing machine configuration
I1101 00:08:49.802692 30593 fix.go:54] fixHost starting:
I1101 00:08:49.802950 30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1101 00:08:49.802988 30593 main.go:141] libmachine: Launching plugin server for driver kvm2
I1101 00:08:49.817041 30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
I1101 00:08:49.817426 30593 main.go:141] libmachine: () Calling .GetVersion
I1101 00:08:49.817852 30593 main.go:141] libmachine: Using API Version 1
I1101 00:08:49.817876 30593 main.go:141] libmachine: () Calling .SetConfigRaw
I1101 00:08:49.818147 30593 main.go:141] libmachine: () Calling .GetMachineName
I1101 00:08:49.818268 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:08:49.818364 30593 main.go:141] libmachine: (multinode-391061) Calling .GetState
I1101 00:08:49.819780 30593 fix.go:102] recreateIfNeeded on multinode-391061: state=Stopped err=<nil>
I1101 00:08:49.819798 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
W1101 00:08:49.819945 30593 fix.go:128] unexpected machine state, will restart: <nil>
I1101 00:08:49.822198 30593 out.go:177] * Restarting existing kvm2 VM for "multinode-391061" ...
I1101 00:08:49.823675 30593 main.go:141] libmachine: (multinode-391061) Calling .Start
I1101 00:08:49.823836 30593 main.go:141] libmachine: (multinode-391061) Ensuring networks are active...
I1101 00:08:49.824527 30593 main.go:141] libmachine: (multinode-391061) Ensuring network default is active
I1101 00:08:49.824903 30593 main.go:141] libmachine: (multinode-391061) Ensuring network mk-multinode-391061 is active
I1101 00:08:49.825231 30593 main.go:141] libmachine: (multinode-391061) Getting domain xml...
I1101 00:08:49.825825 30593 main.go:141] libmachine: (multinode-391061) Creating domain...
I1101 00:08:51.072133 30593 main.go:141] libmachine: (multinode-391061) Waiting to get IP...
I1101 00:08:51.072978 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:51.073561 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:51.073673 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.073534 30629 retry.go:31] will retry after 229.675258ms: waiting for machine to come up
I1101 00:08:51.305068 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:51.305486 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:51.305513 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.305442 30629 retry.go:31] will retry after 372.862383ms: waiting for machine to come up
I1101 00:08:51.680135 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:51.680628 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:51.680663 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.680610 30629 retry.go:31] will retry after 314.755115ms: waiting for machine to come up
I1101 00:08:51.997095 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:51.997485 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:51.997516 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.997452 30629 retry.go:31] will retry after 376.70772ms: waiting for machine to come up
I1101 00:08:52.376191 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:52.376728 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:52.376768 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:52.376689 30629 retry.go:31] will retry after 583.291159ms: waiting for machine to come up
I1101 00:08:52.961471 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:52.961889 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:52.961920 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:52.961826 30629 retry.go:31] will retry after 803.566491ms: waiting for machine to come up
I1101 00:08:53.766791 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:53.767211 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:53.767251 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:53.767153 30629 retry.go:31] will retry after 1.032833525s: waiting for machine to come up
I1101 00:08:54.801328 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:54.801700 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:54.801734 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:54.801656 30629 retry.go:31] will retry after 1.044435025s: waiting for machine to come up
I1101 00:08:55.847409 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:55.847850 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:55.847874 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:55.847797 30629 retry.go:31] will retry after 1.41464542s: waiting for machine to come up
I1101 00:08:57.264298 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:57.264621 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:57.264658 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:57.264585 30629 retry.go:31] will retry after 1.783339985s: waiting for machine to come up
I1101 00:08:59.050737 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:59.051258 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:59.051280 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:59.051209 30629 retry.go:31] will retry after 2.24727828s: waiting for machine to come up
I1101 00:09:01.300675 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:01.301123 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:09:01.301147 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:09:01.301080 30629 retry.go:31] will retry after 2.659318668s: waiting for machine to come up
I1101 00:09:03.964050 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:03.964412 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:09:03.964433 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:09:03.964369 30629 retry.go:31] will retry after 4.002549509s: waiting for machine to come up
I1101 00:09:07.970570 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:07.970947 30593 main.go:141] libmachine: (multinode-391061) Found IP for machine: 192.168.39.43
I1101 00:09:07.970973 30593 main.go:141] libmachine: (multinode-391061) Reserving static IP address...
I1101 00:09:07.970988 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has current primary IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:07.971417 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "multinode-391061", mac: "52:54:00:b9:c2:69", ip: "192.168.39.43"} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:07.971446 30593 main.go:141] libmachine: (multinode-391061) DBG | skip adding static IP to network mk-multinode-391061 - found existing host DHCP lease matching {name: "multinode-391061", mac: "52:54:00:b9:c2:69", ip: "192.168.39.43"}
I1101 00:09:07.971454 30593 main.go:141] libmachine: (multinode-391061) Reserved static IP address: 192.168.39.43
I1101 00:09:07.971463 30593 main.go:141] libmachine: (multinode-391061) Waiting for SSH to be available...
I1101 00:09:07.971472 30593 main.go:141] libmachine: (multinode-391061) DBG | Getting to WaitForSSH function...
I1101 00:09:07.973244 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:07.973598 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:07.973629 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:07.973785 30593 main.go:141] libmachine: (multinode-391061) DBG | Using SSH client type: external
I1101 00:09:07.973815 30593 main.go:141] libmachine: (multinode-391061) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa (-rw-------)
I1101 00:09:07.973859 30593 main.go:141] libmachine: (multinode-391061) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa -p 22] /usr/bin/ssh <nil>}
I1101 00:09:07.973884 30593 main.go:141] libmachine: (multinode-391061) DBG | About to run SSH command:
I1101 00:09:07.973895 30593 main.go:141] libmachine: (multinode-391061) DBG | exit 0
I1101 00:09:08.070105 30593 main.go:141] libmachine: (multinode-391061) DBG | SSH cmd err, output: <nil>:
I1101 00:09:08.070483 30593 main.go:141] libmachine: (multinode-391061) Calling .GetConfigRaw
I1101 00:09:08.071216 30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
I1101 00:09:08.073614 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.074025 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.074060 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.074285 30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
I1101 00:09:08.074479 30593 machine.go:88] provisioning docker machine ...
I1101 00:09:08.074512 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:08.074714 30593 main.go:141] libmachine: (multinode-391061) Calling .GetMachineName
I1101 00:09:08.074856 30593 buildroot.go:166] provisioning hostname "multinode-391061"
I1101 00:09:08.074870 30593 main.go:141] libmachine: (multinode-391061) Calling .GetMachineName
I1101 00:09:08.074990 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.077098 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.077410 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.077452 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.077575 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:08.077739 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.077899 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.078007 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:08.078153 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:09:08.078494 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.43 22 <nil> <nil>}
I1101 00:09:08.078529 30593 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-391061 && echo "multinode-391061" | sudo tee /etc/hostname
I1101 00:09:08.217944 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-391061
I1101 00:09:08.217967 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.220671 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.220963 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.221024 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.221089 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:08.221295 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.221466 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.221616 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:08.221803 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:09:08.222253 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.43 22 <nil> <nil>}
I1101 00:09:08.222280 30593 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-391061' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-391061/g' /etc/hosts;
else
echo '127.0.1.1 multinode-391061' | sudo tee -a /etc/hosts;
fi
fi
I1101 00:09:08.359049 30593 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1101 00:09:08.359078 30593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7251/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7251/.minikube}
I1101 00:09:08.359096 30593 buildroot.go:174] setting up certificates
I1101 00:09:08.359104 30593 provision.go:83] configureAuth start
I1101 00:09:08.359112 30593 main.go:141] libmachine: (multinode-391061) Calling .GetMachineName
I1101 00:09:08.359381 30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
I1101 00:09:08.361931 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.362234 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.362269 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.362374 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.364658 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.364936 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.364968 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.365105 30593 provision.go:138] copyHostCerts
I1101 00:09:08.365133 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
I1101 00:09:08.365172 30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem, removing ...
I1101 00:09:08.365183 30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
I1101 00:09:08.365248 30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem (1082 bytes)
I1101 00:09:08.365344 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
I1101 00:09:08.365365 30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem, removing ...
I1101 00:09:08.365372 30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
I1101 00:09:08.365399 30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem (1123 bytes)
I1101 00:09:08.365452 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
I1101 00:09:08.365467 30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem, removing ...
I1101 00:09:08.365473 30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
I1101 00:09:08.365494 30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem (1675 bytes)
I1101 00:09:08.365549 30593 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem org=jenkins.multinode-391061 san=[192.168.39.43 192.168.39.43 localhost 127.0.0.1 minikube multinode-391061]
I1101 00:09:08.497882 30593 provision.go:172] copyRemoteCerts
I1101 00:09:08.497940 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 00:09:08.497965 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.500598 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.500931 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.500961 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.501176 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:08.501356 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.501513 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:08.501639 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
I1101 00:09:08.594935 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1101 00:09:08.594993 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1101 00:09:08.617737 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1101 00:09:08.617835 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1101 00:09:08.639923 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem -> /etc/docker/server.pem
I1101 00:09:08.640003 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I1101 00:09:08.662129 30593 provision.go:86] duration metric: configureAuth took 303.015088ms
I1101 00:09:08.662155 30593 buildroot.go:189] setting minikube options for container-runtime
I1101 00:09:08.662403 30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1101 00:09:08.662426 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:08.662704 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.665367 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.665756 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.665781 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.665918 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:08.666128 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.666300 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.666449 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:08.666613 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:09:08.666928 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.43 22 <nil> <nil>}
I1101 00:09:08.666940 30593 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1101 00:09:08.795906 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I1101 00:09:08.795936 30593 buildroot.go:70] root file system type: tmpfs
I1101 00:09:08.796096 30593 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1101 00:09:08.796134 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.798879 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.799232 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.799265 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.799423 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:08.799598 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.799753 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.799868 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:08.800041 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:09:08.800361 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.43 22 <nil> <nil>}
I1101 00:09:08.800421 30593 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1101 00:09:08.942805 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
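The bare `ExecStart=` followed by the real `ExecStart=` in the unit above is the standard systemd pattern for *replacing* (rather than appending to) a list-type directive inherited from a base unit. A minimal illustration, written to a scratch directory instead of `/etc/systemd/system` so neither root nor systemd is required (the `override.conf` path and dockerd flags here are illustrative, not taken from the log verbatim):

```shell
#!/bin/sh
# Scratch directory standing in for a systemd drop-in directory.
DROPIN_DIR=$(mktemp -d)
cat > "$DROPIN_DIR/override.conf" <<'EOF'
[Service]
# The bare ExecStart= clears the command inherited from the base unit;
# the second line then becomes the only ExecStart. Without the first
# line, systemd refuses to start a non-oneshot service that ends up
# with two ExecStart= settings.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
cat "$DROPIN_DIR/override.conf"
```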
I1101 00:09:08.942844 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.945908 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.946293 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.946326 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.946513 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:08.946689 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.946882 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.947001 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:08.947184 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:09:08.947647 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.43 22 <nil> <nil>}
I1101 00:09:08.947681 30593 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1101 00:09:09.848694 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
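The `diff ... || { mv ...; systemctl ...; }` command above is an update-only-if-changed idiom: `diff` exits 0 when the files match, so the replace-and-restart branch runs only when the new unit actually differs (here `diff` fails with "can't stat" because the service file does not exist yet, which also triggers the install branch). A sketch with scratch files in place of the unit paths, and an `echo` standing in for the systemctl calls:

```shell
#!/bin/sh
# Scratch files standing in for docker.service and docker.service.new.
OLD=$(mktemp)
NEW=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd' > "$OLD"
echo 'ExecStart=/usr/bin/dockerd --debug' > "$NEW"

# Exit 0 from diff (files identical) skips the braces entirely.
diff -u "$OLD" "$NEW" >/dev/null || {
  mv "$NEW" "$OLD"   # install the new version in place of the old one
  echo 'would run: systemctl daemon-reload && systemctl restart docker'
}
cat "$OLD"
```

This keeps repeated provisioning cheap: an unchanged unit file never triggers a daemon restart.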
I1101 00:09:09.848722 30593 machine.go:91] provisioned docker machine in 1.774228913s
I1101 00:09:09.848735 30593 start.go:300] post-start starting for "multinode-391061" (driver="kvm2")
I1101 00:09:09.848748 30593 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 00:09:09.848772 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:09.849087 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 00:09:09.849113 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:09.851810 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:09.852197 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:09.852243 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:09.852386 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:09.852556 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:09.852728 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:09.852822 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
I1101 00:09:09.947639 30593 ssh_runner.go:195] Run: cat /etc/os-release
I1101 00:09:09.951509 30593 command_runner.go:130] > NAME=Buildroot
I1101 00:09:09.951530 30593 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
I1101 00:09:09.951535 30593 command_runner.go:130] > ID=buildroot
I1101 00:09:09.951542 30593 command_runner.go:130] > VERSION_ID=2021.02.12
I1101 00:09:09.951549 30593 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I1101 00:09:09.951586 30593 info.go:137] Remote host: Buildroot 2021.02.12
I1101 00:09:09.951598 30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/addons for local assets ...
I1101 00:09:09.951663 30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/files for local assets ...
I1101 00:09:09.951768 30593 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> 144632.pem in /etc/ssl/certs
I1101 00:09:09.951785 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> /etc/ssl/certs/144632.pem
I1101 00:09:09.951898 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1101 00:09:09.959594 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /etc/ssl/certs/144632.pem (1708 bytes)
I1101 00:09:09.981962 30593 start.go:303] post-start completed in 133.213964ms
I1101 00:09:09.982003 30593 fix.go:56] fixHost completed within 20.179294964s
I1101 00:09:09.982027 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:09.984776 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:09.985223 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:09.985252 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:09.985386 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:09.985595 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:09.985729 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:09.985860 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:09.985979 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:09:09.986435 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.43 22 <nil> <nil>}
I1101 00:09:09.986451 30593 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1101 00:09:10.119733 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698797350.071514552
I1101 00:09:10.119761 30593 fix.go:206] guest clock: 1698797350.071514552
I1101 00:09:10.119769 30593 fix.go:219] Guest: 2023-11-01 00:09:10.071514552 +0000 UTC Remote: 2023-11-01 00:09:09.982007618 +0000 UTC m=+20.332511469 (delta=89.506934ms)
I1101 00:09:10.119793 30593 fix.go:190] guest clock delta is within tolerance: 89.506934ms
I1101 00:09:10.119800 30593 start.go:83] releasing machines lock for "multinode-391061", held for 20.317128044s
I1101 00:09:10.119826 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:10.120083 30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
I1101 00:09:10.122834 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:10.123267 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:10.123301 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:10.123482 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:10.124067 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:10.124267 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:10.124386 30593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1101 00:09:10.124433 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:10.124459 30593 ssh_runner.go:195] Run: cat /version.json
I1101 00:09:10.124497 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:10.127197 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:10.127360 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:10.127632 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:10.127661 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:10.127789 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:10.127807 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:10.127837 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:10.127985 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:10.127991 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:10.128201 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:10.128203 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:10.128392 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:10.128400 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
I1101 00:09:10.128527 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
I1101 00:09:10.219062 30593 command_runner.go:130] > {"iso_version": "v1.32.0-1698773592-17486", "kicbase_version": "v0.0.41-1698660445-17527", "minikube_version": "v1.32.0-beta.0", "commit": "01e1cff766666ed9b9dd97c2a32d71cdb94ff3cf"}
I1101 00:09:10.244630 30593 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I1101 00:09:10.245754 30593 ssh_runner.go:195] Run: systemctl --version
I1101 00:09:10.251311 30593 command_runner.go:130] > systemd 247 (247)
I1101 00:09:10.251350 30593 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I1101 00:09:10.251621 30593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1101 00:09:10.256782 30593 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W1101 00:09:10.256835 30593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1101 00:09:10.256887 30593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1101 00:09:10.271406 30593 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I1101 00:09:10.271460 30593 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1101 00:09:10.271470 30593 start.go:472] detecting cgroup driver to use...
I1101 00:09:10.271565 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1101 00:09:10.288462 30593 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I1101 00:09:10.288546 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I1101 00:09:10.298090 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1101 00:09:10.307653 30593 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1101 00:09:10.307716 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1101 00:09:10.317073 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1101 00:09:10.326800 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1101 00:09:10.336055 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1101 00:09:10.345573 30593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1101 00:09:10.355553 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
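The string of `sed -i -r 's|^( *)key = ...|\1key = ...|'` commands above all use the same idiom: rewrite a TOML key's value in place while the `\1` backreference preserves whatever indentation the key had. A sketch against a scratch file (the file and its single key are illustrative; `SystemdCgroup` is one of the keys the log actually rewrites):

```shell
#!/bin/sh
# Scratch file standing in for /etc/containerd/config.toml.
CONF=$(mktemp)
printf '    SystemdCgroup = true\n' > "$CONF"

# Capture the leading spaces, then re-emit them before the new value,
# so nesting depth in the TOML file is untouched.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|' "$CONF"
cat "$CONF"
```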
I1101 00:09:10.365472 30593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1101 00:09:10.373896 30593 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I1101 00:09:10.374055 30593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1101 00:09:10.382414 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:09:10.484557 30593 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1101 00:09:10.503546 30593 start.go:472] detecting cgroup driver to use...
I1101 00:09:10.503677 30593 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1101 00:09:10.516143 30593 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I1101 00:09:10.517085 30593 command_runner.go:130] > [Unit]
I1101 00:09:10.517117 30593 command_runner.go:130] > Description=Docker Application Container Engine
I1101 00:09:10.517127 30593 command_runner.go:130] > Documentation=https://docs.docker.com
I1101 00:09:10.517135 30593 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I1101 00:09:10.517143 30593 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I1101 00:09:10.517151 30593 command_runner.go:130] > StartLimitBurst=3
I1101 00:09:10.517159 30593 command_runner.go:130] > StartLimitIntervalSec=60
I1101 00:09:10.517169 30593 command_runner.go:130] > [Service]
I1101 00:09:10.517175 30593 command_runner.go:130] > Type=notify
I1101 00:09:10.517185 30593 command_runner.go:130] > Restart=on-failure
I1101 00:09:10.517197 30593 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I1101 00:09:10.517218 30593 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I1101 00:09:10.517247 30593 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I1101 00:09:10.517256 30593 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I1101 00:09:10.517266 30593 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I1101 00:09:10.517276 30593 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I1101 00:09:10.517285 30593 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I1101 00:09:10.517306 30593 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I1101 00:09:10.517318 30593 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I1101 00:09:10.517328 30593 command_runner.go:130] > ExecStart=
I1101 00:09:10.517356 30593 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I1101 00:09:10.517369 30593 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I1101 00:09:10.517383 30593 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I1101 00:09:10.517397 30593 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I1101 00:09:10.517408 30593 command_runner.go:130] > LimitNOFILE=infinity
I1101 00:09:10.517415 30593 command_runner.go:130] > LimitNPROC=infinity
I1101 00:09:10.517425 30593 command_runner.go:130] > LimitCORE=infinity
I1101 00:09:10.517433 30593 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I1101 00:09:10.517441 30593 command_runner.go:130] > # Only systemd 226 and above support this version.
I1101 00:09:10.517447 30593 command_runner.go:130] > TasksMax=infinity
I1101 00:09:10.517454 30593 command_runner.go:130] > TimeoutStartSec=0
I1101 00:09:10.517463 30593 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I1101 00:09:10.517469 30593 command_runner.go:130] > Delegate=yes
I1101 00:09:10.517477 30593 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I1101 00:09:10.517488 30593 command_runner.go:130] > KillMode=process
I1101 00:09:10.517502 30593 command_runner.go:130] > [Install]
I1101 00:09:10.517521 30593 command_runner.go:130] > WantedBy=multi-user.target
I1101 00:09:10.517760 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 00:09:10.537353 30593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1101 00:09:10.559962 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 00:09:10.572863 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 00:09:10.585294 30593 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1101 00:09:10.613156 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 00:09:10.626018 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1101 00:09:10.642949 30593 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I1101 00:09:10.643493 30593 ssh_runner.go:195] Run: which cri-dockerd
I1101 00:09:10.647034 30593 command_runner.go:130] > /usr/bin/cri-dockerd
I1101 00:09:10.647148 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1101 00:09:10.656096 30593 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1101 00:09:10.672510 30593 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1101 00:09:10.775493 30593 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1101 00:09:10.890922 30593 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
I1101 00:09:10.891096 30593 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1101 00:09:10.911224 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:09:11.028462 30593 ssh_runner.go:195] Run: sudo systemctl restart docker
I1101 00:09:12.495501 30593 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.467002879s)
I1101 00:09:12.495587 30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1101 00:09:12.596857 30593 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1101 00:09:12.696859 30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1101 00:09:12.818695 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:09:12.925882 30593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1101 00:09:12.942696 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:09:13.046788 30593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I1101 00:09:13.125894 30593 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1101 00:09:13.125989 30593 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1101 00:09:13.131383 30593 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I1101 00:09:13.131401 30593 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I1101 00:09:13.131407 30593 command_runner.go:130] > Device: 16h/22d Inode: 823 Links: 1
I1101 00:09:13.131414 30593 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I1101 00:09:13.131420 30593 command_runner.go:130] > Access: 2023-11-01 00:09:13.012751521 +0000
I1101 00:09:13.131425 30593 command_runner.go:130] > Modify: 2023-11-01 00:09:13.012751521 +0000
I1101 00:09:13.131432 30593 command_runner.go:130] > Change: 2023-11-01 00:09:13.015751521 +0000
I1101 00:09:13.131448 30593 command_runner.go:130] > Birth: -
I1101 00:09:13.131608 30593 start.go:540] Will wait 60s for crictl version
I1101 00:09:13.131663 30593 ssh_runner.go:195] Run: which crictl
I1101 00:09:13.135151 30593 command_runner.go:130] > /usr/bin/crictl
I1101 00:09:13.135210 30593 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1101 00:09:13.203365 30593 command_runner.go:130] > Version: 0.1.0
I1101 00:09:13.203385 30593 command_runner.go:130] > RuntimeName: docker
I1101 00:09:13.203397 30593 command_runner.go:130] > RuntimeVersion: 24.0.6
I1101 00:09:13.203407 30593 command_runner.go:130] > RuntimeApiVersion: v1
I1101 00:09:13.203445 30593 start.go:556] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.6
RuntimeApiVersion: v1
I1101 00:09:13.203500 30593 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1101 00:09:13.228282 30593 command_runner.go:130] > 24.0.6
I1101 00:09:13.228417 30593 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1101 00:09:13.252487 30593 command_runner.go:130] > 24.0.6
I1101 00:09:13.254840 30593 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
I1101 00:09:13.254880 30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
I1101 00:09:13.257487 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:13.257845 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:13.257879 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:13.258035 30593 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1101 00:09:13.261869 30593 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 00:09:13.272965 30593 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I1101 00:09:13.273017 30593 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1101 00:09:13.291973 30593 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
I1101 00:09:13.292012 30593 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
I1101 00:09:13.292018 30593 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
I1101 00:09:13.292023 30593 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
I1101 00:09:13.292028 30593 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
I1101 00:09:13.292033 30593 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I1101 00:09:13.292039 30593 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I1101 00:09:13.292046 30593 command_runner.go:130] > registry.k8s.io/pause:3.9
I1101 00:09:13.292051 30593 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I1101 00:09:13.292058 30593 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I1101 00:09:13.292659 30593 docker.go:699] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
kindest/kindnetd:v20230809-80a64d96
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I1101 00:09:13.292679 30593 docker.go:629] Images already preloaded, skipping extraction
I1101 00:09:13.292737 30593 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1101 00:09:13.311772 30593 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
I1101 00:09:13.311797 30593 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
I1101 00:09:13.311806 30593 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
I1101 00:09:13.311814 30593 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
I1101 00:09:13.311821 30593 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
I1101 00:09:13.311826 30593 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I1101 00:09:13.311831 30593 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I1101 00:09:13.311836 30593 command_runner.go:130] > registry.k8s.io/pause:3.9
I1101 00:09:13.311841 30593 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I1101 00:09:13.311857 30593 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I1101 00:09:13.311882 30593 docker.go:699] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
kindest/kindnetd:v20230809-80a64d96
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I1101 00:09:13.311900 30593 cache_images.go:84] Images are preloaded, skipping loading
I1101 00:09:13.311963 30593 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1101 00:09:13.336389 30593 command_runner.go:130] > cgroupfs
I1101 00:09:13.336458 30593 cni.go:84] Creating CNI manager for ""
I1101 00:09:13.336469 30593 cni.go:136] 2 nodes found, recommending kindnet
I1101 00:09:13.336493 30593 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1101 00:09:13.336521 30593 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-391061 NodeName:multinode-391061 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1101 00:09:13.336694 30593 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.43
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "multinode-391061"
kubeletExtraArgs:
node-ip: 192.168.39.43
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1101 00:09:13.336788 30593 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-391061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
[Install]
config:
{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1101 00:09:13.336851 30593 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
I1101 00:09:13.346367 30593 command_runner.go:130] > kubeadm
I1101 00:09:13.346390 30593 command_runner.go:130] > kubectl
I1101 00:09:13.346396 30593 command_runner.go:130] > kubelet
I1101 00:09:13.346518 30593 binaries.go:44] Found k8s binaries, skipping transfer
I1101 00:09:13.346594 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1101 00:09:13.355275 30593 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
I1101 00:09:13.370971 30593 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1101 00:09:13.387036 30593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
I1101 00:09:13.402440 30593 ssh_runner.go:195] Run: grep 192.168.39.43 control-plane.minikube.internal$ /etc/hosts
I1101 00:09:13.406022 30593 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.43 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 00:09:13.417070 30593 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061 for IP: 192.168.39.43
I1101 00:09:13.417103 30593 certs.go:190] acquiring lock for shared ca certs: {Name:mkd78a553474b872bb63abf547b6fa0a317dc3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 00:09:13.417247 30593 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key
I1101 00:09:13.417296 30593 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key
I1101 00:09:13.417388 30593 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.key
I1101 00:09:13.417450 30593 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.key.7e75dda5
I1101 00:09:13.417508 30593 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.key
I1101 00:09:13.417523 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1101 00:09:13.417544 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1101 00:09:13.417575 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1101 00:09:13.417593 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1101 00:09:13.417603 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1101 00:09:13.417615 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1101 00:09:13.417625 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1101 00:09:13.417636 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1101 00:09:13.417690 30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem (1338 bytes)
W1101 00:09:13.417720 30593 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463_empty.pem, impossibly tiny 0 bytes
I1101 00:09:13.417729 30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem (1675 bytes)
I1101 00:09:13.417752 30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem (1082 bytes)
I1101 00:09:13.417776 30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem (1123 bytes)
I1101 00:09:13.417804 30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem (1675 bytes)
I1101 00:09:13.417847 30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem (1708 bytes)
I1101 00:09:13.417870 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem -> /usr/share/ca-certificates/14463.pem
I1101 00:09:13.417882 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> /usr/share/ca-certificates/144632.pem
I1101 00:09:13.417894 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1101 00:09:13.418474 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1101 00:09:13.440131 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1101 00:09:13.461354 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1101 00:09:13.484158 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1101 00:09:13.507642 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1101 00:09:13.530560 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1101 00:09:13.552173 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1101 00:09:13.572803 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1101 00:09:13.594200 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem --> /usr/share/ca-certificates/14463.pem (1338 bytes)
I1101 00:09:13.614546 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /usr/share/ca-certificates/144632.pem (1708 bytes)
I1101 00:09:13.635287 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1101 00:09:13.655804 30593 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
I1101 00:09:13.671160 30593 ssh_runner.go:195] Run: openssl version
I1101 00:09:13.676595 30593 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I1101 00:09:13.676661 30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14463.pem && ln -fs /usr/share/ca-certificates/14463.pem /etc/ssl/certs/14463.pem"
I1101 00:09:13.687719 30593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14463.pem
I1101 00:09:13.692306 30593 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 23:48 /usr/share/ca-certificates/14463.pem
I1101 00:09:13.692356 30593 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:48 /usr/share/ca-certificates/14463.pem
I1101 00:09:13.692398 30593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14463.pem
I1101 00:09:13.697913 30593 command_runner.go:130] > 51391683
I1101 00:09:13.698156 30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14463.pem /etc/ssl/certs/51391683.0"
I1101 00:09:13.708708 30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144632.pem && ln -fs /usr/share/ca-certificates/144632.pem /etc/ssl/certs/144632.pem"
I1101 00:09:13.718932 30593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144632.pem
I1101 00:09:13.723625 30593 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 23:48 /usr/share/ca-certificates/144632.pem
I1101 00:09:13.723665 30593 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:48 /usr/share/ca-certificates/144632.pem
I1101 00:09:13.723717 30593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144632.pem
I1101 00:09:13.729381 30593 command_runner.go:130] > 3ec20f2e
I1101 00:09:13.729472 30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144632.pem /etc/ssl/certs/3ec20f2e.0"
I1101 00:09:13.739928 30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1101 00:09:13.749888 30593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1101 00:09:13.754135 30593 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 23:44 /usr/share/ca-certificates/minikubeCA.pem
I1101 00:09:13.754186 30593 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:44 /usr/share/ca-certificates/minikubeCA.pem
I1101 00:09:13.754224 30593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1101 00:09:13.759372 30593 command_runner.go:130] > b5213941
I1101 00:09:13.759586 30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1101 00:09:13.770878 30593 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I1101 00:09:13.774944 30593 command_runner.go:130] > ca.crt
I1101 00:09:13.774961 30593 command_runner.go:130] > ca.key
I1101 00:09:13.774966 30593 command_runner.go:130] > healthcheck-client.crt
I1101 00:09:13.774977 30593 command_runner.go:130] > healthcheck-client.key
I1101 00:09:13.774981 30593 command_runner.go:130] > peer.crt
I1101 00:09:13.774985 30593 command_runner.go:130] > peer.key
I1101 00:09:13.774988 30593 command_runner.go:130] > server.crt
I1101 00:09:13.774993 30593 command_runner.go:130] > server.key
I1101 00:09:13.775195 30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1101 00:09:13.780693 30593 command_runner.go:130] > Certificate will not expire
I1101 00:09:13.781005 30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1101 00:09:13.786438 30593 command_runner.go:130] > Certificate will not expire
I1101 00:09:13.786773 30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1101 00:09:13.792247 30593 command_runner.go:130] > Certificate will not expire
I1101 00:09:13.792305 30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1101 00:09:13.797510 30593 command_runner.go:130] > Certificate will not expire
I1101 00:09:13.797845 30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1101 00:09:13.803206 30593 command_runner.go:130] > Certificate will not expire
I1101 00:09:13.803273 30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1101 00:09:13.808620 30593 command_runner.go:130] > Certificate will not expire
I1101 00:09:13.808816 30593 kubeadm.go:404] StartCluster: {Name:multinode-391061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1101 00:09:13.808974 30593 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1101 00:09:13.826906 30593 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1101 00:09:13.836480 30593 command_runner.go:130] > /var/lib/kubelet/config.yaml
I1101 00:09:13.836509 30593 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
I1101 00:09:13.836518 30593 command_runner.go:130] > /var/lib/minikube/etcd:
I1101 00:09:13.836524 30593 command_runner.go:130] > member
I1101 00:09:13.836597 30593 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I1101 00:09:13.836612 30593 kubeadm.go:636] restartCluster start
I1101 00:09:13.836669 30593 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1101 00:09:13.845747 30593 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1101 00:09:13.846165 30593 kubeconfig.go:135] verify returned: extract IP: "multinode-391061" does not appear in /home/jenkins/minikube-integration/17486-7251/kubeconfig
I1101 00:09:13.846289 30593 kubeconfig.go:146] "multinode-391061" context is missing from /home/jenkins/minikube-integration/17486-7251/kubeconfig - will repair!
I1101 00:09:13.846620 30593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 00:09:13.847028 30593 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17486-7251/kubeconfig
I1101 00:09:13.847260 30593 kapi.go:59] client config for multinode-391061: &rest.Config{Host:"https://192.168.39.43:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 00:09:13.847933 30593 cert_rotation.go:137] Starting client certificate rotation controller
I1101 00:09:13.848016 30593 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1101 00:09:13.857014 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:13.857066 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:13.868306 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:13.868326 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:13.868365 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:13.879425 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:14.380169 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:14.380271 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:14.393563 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:14.879961 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:14.880030 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:14.891500 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:15.380030 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:15.380116 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:15.394849 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:15.880377 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:15.880462 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:15.892276 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:16.379827 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:16.379933 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:16.391756 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:16.880389 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:16.880484 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:16.892186 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:17.379748 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:17.379838 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:17.391913 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:17.880537 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:17.880630 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:17.893349 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:18.379933 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:18.380022 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:18.391643 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:18.880268 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:18.880355 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:18.892132 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:19.379676 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:19.379760 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:19.391501 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:19.880377 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:19.880494 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:19.892270 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:20.379875 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:20.379968 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:20.391559 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:20.880250 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:20.880355 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:20.891729 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:21.380337 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:21.380407 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:21.391986 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:21.879571 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:21.879681 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:21.891291 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:22.379884 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:22.379978 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:22.391825 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:22.880476 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:22.880570 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:22.892224 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:23.379724 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:23.379835 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:23.391883 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:23.857628 30593 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
I1101 00:09:23.857661 30593 kubeadm.go:1128] stopping kube-system containers ...
I1101 00:09:23.857758 30593 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1101 00:09:23.879399 30593 command_runner.go:130] > c8ec107c7b83
I1101 00:09:23.879423 30593 command_runner.go:130] > 8a050fec9e56
I1101 00:09:23.879444 30593 command_runner.go:130] > 0922f8b627ba
I1101 00:09:23.879448 30593 command_runner.go:130] > 7e5dd13abba8
I1101 00:09:23.879453 30593 command_runner.go:130] > 717d368b8c2a
I1101 00:09:23.879456 30593 command_runner.go:130] > beeaf0ac020b
I1101 00:09:23.879460 30593 command_runner.go:130] > d52c65ebca75
I1101 00:09:23.879464 30593 command_runner.go:130] > 5c355a51915e
I1101 00:09:23.879467 30593 command_runner.go:130] > 6e72da581d8b
I1101 00:09:23.879471 30593 command_runner.go:130] > 37d9dd0022b9
I1101 00:09:23.879475 30593 command_runner.go:130] > c5ea3d84d06f
I1101 00:09:23.879479 30593 command_runner.go:130] > 32294fac02b3
I1101 00:09:23.879482 30593 command_runner.go:130] > a49a86a47d7c
I1101 00:09:23.879486 30593 command_runner.go:130] > 36d5f0bd5cf2
I1101 00:09:23.879494 30593 command_runner.go:130] > 92b70c8321ee
I1101 00:09:23.879498 30593 command_runner.go:130] > 9f5176fde232
I1101 00:09:23.879502 30593 command_runner.go:130] > f576715f1f47
I1101 00:09:23.879506 30593 command_runner.go:130] > 44a2cc98732a
I1101 00:09:23.879509 30593 command_runner.go:130] > 5a2e590156b6
I1101 00:09:23.879518 30593 command_runner.go:130] > feea3a57d77e
I1101 00:09:23.879525 30593 command_runner.go:130] > 7ad930b36263
I1101 00:09:23.879528 30593 command_runner.go:130] > b110676d9563
I1101 00:09:23.879533 30593 command_runner.go:130] > 8659d1168087
I1101 00:09:23.879540 30593 command_runner.go:130] > 7f78495183a7
I1101 00:09:23.879543 30593 command_runner.go:130] > 21b2a7338538
I1101 00:09:23.879547 30593 command_runner.go:130] > 2b739c443c07
I1101 00:09:23.879553 30593 command_runner.go:130] > f8c33525e5e4
I1101 00:09:23.879557 30593 command_runner.go:130] > b6d83949182f
I1101 00:09:23.879561 30593 command_runner.go:130] > 8dc7f1a0f0cf
I1101 00:09:23.879565 30593 command_runner.go:130] > d114ab0f9727
I1101 00:09:23.879569 30593 command_runner.go:130] > 88e660774880
I1101 00:09:23.880506 30593 docker.go:470] Stopping containers: [c8ec107c7b83 8a050fec9e56 0922f8b627ba 7e5dd13abba8 717d368b8c2a beeaf0ac020b d52c65ebca75 5c355a51915e 6e72da581d8b 37d9dd0022b9 c5ea3d84d06f 32294fac02b3 a49a86a47d7c 36d5f0bd5cf2 92b70c8321ee 9f5176fde232 f576715f1f47 44a2cc98732a 5a2e590156b6 feea3a57d77e 7ad930b36263 b110676d9563 8659d1168087 7f78495183a7 21b2a7338538 2b739c443c07 f8c33525e5e4 b6d83949182f 8dc7f1a0f0cf d114ab0f9727 88e660774880]
I1101 00:09:23.880594 30593 ssh_runner.go:195] Run: docker stop c8ec107c7b83 8a050fec9e56 0922f8b627ba 7e5dd13abba8 717d368b8c2a beeaf0ac020b d52c65ebca75 5c355a51915e 6e72da581d8b 37d9dd0022b9 c5ea3d84d06f 32294fac02b3 a49a86a47d7c 36d5f0bd5cf2 92b70c8321ee 9f5176fde232 f576715f1f47 44a2cc98732a 5a2e590156b6 feea3a57d77e 7ad930b36263 b110676d9563 8659d1168087 7f78495183a7 21b2a7338538 2b739c443c07 f8c33525e5e4 b6d83949182f 8dc7f1a0f0cf d114ab0f9727 88e660774880
I1101 00:09:23.906747 30593 command_runner.go:130] > c8ec107c7b83
I1101 00:09:23.906784 30593 command_runner.go:130] > 8a050fec9e56
I1101 00:09:23.906790 30593 command_runner.go:130] > 0922f8b627ba
I1101 00:09:23.906941 30593 command_runner.go:130] > 7e5dd13abba8
I1101 00:09:23.907074 30593 command_runner.go:130] > 717d368b8c2a
I1101 00:09:23.907086 30593 command_runner.go:130] > beeaf0ac020b
I1101 00:09:23.907092 30593 command_runner.go:130] > d52c65ebca75
I1101 00:09:23.907110 30593 command_runner.go:130] > 5c355a51915e
I1101 00:09:23.907116 30593 command_runner.go:130] > 6e72da581d8b
I1101 00:09:23.907123 30593 command_runner.go:130] > 37d9dd0022b9
I1101 00:09:23.907130 30593 command_runner.go:130] > c5ea3d84d06f
I1101 00:09:23.907139 30593 command_runner.go:130] > 32294fac02b3
I1101 00:09:23.907146 30593 command_runner.go:130] > a49a86a47d7c
I1101 00:09:23.907157 30593 command_runner.go:130] > 36d5f0bd5cf2
I1101 00:09:23.907168 30593 command_runner.go:130] > 92b70c8321ee
I1101 00:09:23.907176 30593 command_runner.go:130] > 9f5176fde232
I1101 00:09:23.907188 30593 command_runner.go:130] > f576715f1f47
I1101 00:09:23.907198 30593 command_runner.go:130] > 44a2cc98732a
I1101 00:09:23.907202 30593 command_runner.go:130] > 5a2e590156b6
I1101 00:09:23.907207 30593 command_runner.go:130] > feea3a57d77e
I1101 00:09:23.907213 30593 command_runner.go:130] > 7ad930b36263
I1101 00:09:23.907220 30593 command_runner.go:130] > b110676d9563
I1101 00:09:23.907227 30593 command_runner.go:130] > 8659d1168087
I1101 00:09:23.907238 30593 command_runner.go:130] > 7f78495183a7
I1101 00:09:23.907244 30593 command_runner.go:130] > 21b2a7338538
I1101 00:09:23.907254 30593 command_runner.go:130] > 2b739c443c07
I1101 00:09:23.907263 30593 command_runner.go:130] > f8c33525e5e4
I1101 00:09:23.907270 30593 command_runner.go:130] > b6d83949182f
I1101 00:09:23.907278 30593 command_runner.go:130] > 8dc7f1a0f0cf
I1101 00:09:23.907284 30593 command_runner.go:130] > d114ab0f9727
I1101 00:09:23.907288 30593 command_runner.go:130] > 88e660774880
I1101 00:09:23.908329 30593 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1101 00:09:23.924405 30593 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 00:09:23.933413 30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I1101 00:09:23.933460 30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I1101 00:09:23.933474 30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I1101 00:09:23.933508 30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 00:09:23.933573 30593 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 00:09:23.933632 30593 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 00:09:23.942681 30593 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1101 00:09:23.942716 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1101 00:09:24.061200 30593 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1101 00:09:24.061740 30593 command_runner.go:130] > [certs] Using existing ca certificate authority
I1101 00:09:24.062273 30593 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I1101 00:09:24.062864 30593 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1101 00:09:24.063543 30593 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
I1101 00:09:24.064483 30593 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
I1101 00:09:24.065146 30593 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
I1101 00:09:24.065723 30593 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
I1101 00:09:24.066240 30593 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
I1101 00:09:24.066826 30593 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1101 00:09:24.067296 30593 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
I1101 00:09:24.067896 30593 command_runner.go:130] > [certs] Using the existing "sa" key
I1101 00:09:24.069200 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1101 00:09:24.889031 30593 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1101 00:09:24.889057 30593 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I1101 00:09:24.889063 30593 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1101 00:09:24.889069 30593 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1101 00:09:24.889075 30593 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1101 00:09:24.889099 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1101 00:09:25.068922 30593 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1101 00:09:25.068953 30593 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1101 00:09:25.068959 30593 command_runner.go:130] > [kubelet-start] Starting the kubelet
I1101 00:09:25.069343 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1101 00:09:25.134897 30593 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1101 00:09:25.134925 30593 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I1101 00:09:25.141279 30593 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1101 00:09:25.148755 30593 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I1101 00:09:25.153988 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I1101 00:09:25.224920 30593 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1101 00:09:25.228266 30593 api_server.go:52] waiting for apiserver process to appear ...
I1101 00:09:25.228336 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:25.246286 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:25.761474 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:26.261798 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:26.761515 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:27.261570 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:27.761008 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:27.804720 30593 command_runner.go:130] > 1704
I1101 00:09:27.806000 30593 api_server.go:72] duration metric: took 2.577736282s to wait for apiserver process to appear ...
I1101 00:09:27.806022 30593 api_server.go:88] waiting for apiserver healthz status ...
I1101 00:09:27.806041 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:27.806649 30593 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
I1101 00:09:27.806703 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:27.807202 30593 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
I1101 00:09:28.307960 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:31.401471 30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1101 00:09:31.401504 30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1101 00:09:31.401515 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:31.478349 30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1101 00:09:31.478386 30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1101 00:09:31.807657 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:31.816386 30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W1101 00:09:31.816421 30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I1101 00:09:32.308084 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:32.313351 30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W1101 00:09:32.313393 30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I1101 00:09:32.807687 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:32.814924 30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
ok
I1101 00:09:32.815019 30593 round_trippers.go:463] GET https://192.168.39.43:8443/version
I1101 00:09:32.815029 30593 round_trippers.go:469] Request Headers:
I1101 00:09:32.815039 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:32.815049 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:32.823839 30593 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I1101 00:09:32.823862 30593 round_trippers.go:577] Response Headers:
I1101 00:09:32.823873 30593 round_trippers.go:580] Audit-Id: 654a1cb8-a85b-41cb-aea3-21ea6bc79004
I1101 00:09:32.823885 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:32.823891 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:32.823898 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:32.823905 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:32.823913 30593 round_trippers.go:580] Content-Length: 264
I1101 00:09:32.823921 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:32 GMT
I1101 00:09:32.823947 30593 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.3",
"gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
"gitTreeState": "clean",
"buildDate": "2023-10-18T11:33:18Z",
"goVersion": "go1.20.10",
"compiler": "gc",
"platform": "linux/amd64"
}
I1101 00:09:32.824032 30593 api_server.go:141] control plane version: v1.28.3
I1101 00:09:32.824050 30593 api_server.go:131] duration metric: took 5.018019595s to wait for apiserver health ...
I1101 00:09:32.824061 30593 cni.go:84] Creating CNI manager for ""
I1101 00:09:32.824070 30593 cni.go:136] 2 nodes found, recommending kindnet
I1101 00:09:32.826169 30593 out.go:177] * Configuring CNI (Container Networking Interface) ...
I1101 00:09:32.827914 30593 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1101 00:09:32.841919 30593 command_runner.go:130] > File: /opt/cni/bin/portmap
I1101 00:09:32.841942 30593 command_runner.go:130] > Size: 2615256 Blocks: 5112 IO Block: 4096 regular file
I1101 00:09:32.841948 30593 command_runner.go:130] > Device: 11h/17d Inode: 3544 Links: 1
I1101 00:09:32.841955 30593 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I1101 00:09:32.841960 30593 command_runner.go:130] > Access: 2023-11-01 00:09:01.939751521 +0000
I1101 00:09:32.841969 30593 command_runner.go:130] > Modify: 2023-10-31 23:04:20.000000000 +0000
I1101 00:09:32.841974 30593 command_runner.go:130] > Change: 2023-11-01 00:09:00.154751521 +0000
I1101 00:09:32.841979 30593 command_runner.go:130] > Birth: -
I1101 00:09:32.843041 30593 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
I1101 00:09:32.843061 30593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I1101 00:09:32.868639 30593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1101 00:09:34.233741 30593 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I1101 00:09:34.264714 30593 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I1101 00:09:34.269029 30593 command_runner.go:130] > serviceaccount/kindnet unchanged
I1101 00:09:34.306476 30593 command_runner.go:130] > daemonset.apps/kindnet configured
I1101 00:09:34.313598 30593 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.44492846s)
I1101 00:09:34.313628 30593 system_pods.go:43] waiting for kube-system pods to appear ...
I1101 00:09:34.313739 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:34.313753 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.313764 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.313774 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.328832 30593 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
I1101 00:09:34.328855 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.328863 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.328871 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.328944 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.328962 30593 round_trippers.go:580] Audit-Id: 9a80f099-79a4-48ce-bc32-9266f1c0dc9f
I1101 00:09:34.328971 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.328985 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.330618 30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1205"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84772 chars]
I1101 00:09:34.334579 30593 system_pods.go:59] 12 kube-system pods found
I1101 00:09:34.334612 30593 system_pods.go:61] "coredns-5dd5756b68-dg5w7" [eb94555e-1465-4dec-9d6d-ebcbec02841e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 00:09:34.334627 30593 system_pods.go:61] "etcd-multinode-391061" [0537cc4c-2127-4424-b02f-9e4747bc8713] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1101 00:09:34.334633 30593 system_pods.go:61] "kindnet-4jfj9" [2559e20b-85cf-43d5-8663-7ec855d71df9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I1101 00:09:34.334638 30593 system_pods.go:61] "kindnet-lcljq" [171d5f22-d781-4224-88f7-f940ad9e747b] Running
I1101 00:09:34.334642 30593 system_pods.go:61] "kindnet-wrdhd" [85db010e-82bd-4efa-a760-0669bf1e52de] Running
I1101 00:09:34.334649 30593 system_pods.go:61] "kube-apiserver-multinode-391061" [dff82899-3db2-46a2-aea0-ec57d58be1c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1101 00:09:34.334659 30593 system_pods.go:61] "kube-controller-manager-multinode-391061" [4775e566-6acd-43ac-b7cd-8dbd245c33cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1101 00:09:34.334666 30593 system_pods.go:61] "kube-proxy-clsrp" [a747b091-d679-4ae6-a995-c980235c9a61] Running
I1101 00:09:34.334670 30593 system_pods.go:61] "kube-proxy-rcnv9" [9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9] Running
I1101 00:09:34.334674 30593 system_pods.go:61] "kube-proxy-vdjh2" [9838a111-09e4-4975-b925-1ae5dcfa7334] Running
I1101 00:09:34.334679 30593 system_pods.go:61] "kube-scheduler-multinode-391061" [eaf767ff-8f68-4b91-bcd7-b550481a6155] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1101 00:09:34.334685 30593 system_pods.go:61] "storage-provisioner" [b0b970e9-7d0b-4e94-8ca8-2f3348eaf579] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1101 00:09:34.334691 30593 system_pods.go:74] duration metric: took 21.056413ms to wait for pod list to return data ...
I1101 00:09:34.334704 30593 node_conditions.go:102] verifying NodePressure condition ...
I1101 00:09:34.334757 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes
I1101 00:09:34.334764 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.334771 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.334777 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.340145 30593 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I1101 00:09:34.340163 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.340169 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.340175 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.340180 30593 round_trippers.go:580] Audit-Id: 1531eb5d-604e-4c94-96b1-59616ac61bc1
I1101 00:09:34.340185 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.340189 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.340199 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.340500 30593 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1205"},"items":[{"metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma
nagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 9590 chars]
I1101 00:09:34.341106 30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1101 00:09:34.341127 30593 node_conditions.go:123] node cpu capacity is 2
I1101 00:09:34.341135 30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1101 00:09:34.341139 30593 node_conditions.go:123] node cpu capacity is 2
I1101 00:09:34.341143 30593 node_conditions.go:105] duration metric: took 6.435475ms to run NodePressure ...
I1101 00:09:34.341158 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1101 00:09:34.596643 30593 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I1101 00:09:34.664781 30593 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I1101 00:09:34.667106 30593 kubeadm.go:772] waiting for restarted kubelet to initialise ...
I1101 00:09:34.667212 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
I1101 00:09:34.667221 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.667228 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.667234 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.673886 30593 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I1101 00:09:34.673905 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.673912 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.673918 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.673923 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.673936 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.673941 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.673946 30593 round_trippers.go:580] Audit-Id: 7dc67d14-eb2e-46d1-aa78-54d52af1af34
I1101 00:09:34.675336 30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"etcd-multinode-391061","namespace":"kube-system","uid":"0537cc4c-2127-4424-b02f-9e4747bc8713","resourceVersion":"1180","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.43:2379","kubernetes.io/config.hash":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.mirror":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.seen":"2023-11-01T00:02:21.059094445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29766 chars]
I1101 00:09:34.676627 30593 kubeadm.go:787] kubelet initialised
I1101 00:09:34.676644 30593 kubeadm.go:788] duration metric: took 9.518378ms waiting for restarted kubelet to initialise ...
I1101 00:09:34.676651 30593 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 00:09:34.676705 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:34.676713 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.676720 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.676728 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.683293 30593 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I1101 00:09:34.683308 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.683315 30593 round_trippers.go:580] Audit-Id: b0192f99-985e-4aae-927b-c47d95fe8014
I1101 00:09:34.683321 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.683327 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.683332 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.683338 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.683350 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.685550 30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84772 chars]
I1101 00:09:34.688329 30593 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
I1101 00:09:34.688397 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:34.688408 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.688416 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.688421 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.698455 30593 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
I1101 00:09:34.699740 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.699755 30593 round_trippers.go:580] Audit-Id: eb7d9633-7fab-456d-a9f4-795f402a1e5a
I1101 00:09:34.699764 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.699774 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.699785 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.699794 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.699803 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.699985 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:34.700490 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:34.700507 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.700517 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.700526 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.713644 30593 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
I1101 00:09:34.713666 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.713679 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.713686 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.713694 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.713702 30593 round_trippers.go:580] Audit-Id: ee2f8b85-6ebc-4ce5-b02d-f9b38983f319
I1101 00:09:34.713710 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.713722 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.713963 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:34.714314 30593 pod_ready.go:97] node "multinode-391061" hosting pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.714332 30593 pod_ready.go:81] duration metric: took 25.984465ms waiting for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
E1101 00:09:34.714343 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.714355 30593 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:34.714451 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-391061
I1101 00:09:34.714465 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.714476 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.714486 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.716800 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:34.716818 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.716827 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.716838 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.716846 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.716854 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.716866 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.716879 30593 round_trippers.go:580] Audit-Id: 0183d545-7a83-4bf3-bb19-280d54d90e72
I1101 00:09:34.717288 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-391061","namespace":"kube-system","uid":"0537cc4c-2127-4424-b02f-9e4747bc8713","resourceVersion":"1180","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.43:2379","kubernetes.io/config.hash":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.mirror":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.seen":"2023-11-01T00:02:21.059094445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6296 chars]
I1101 00:09:34.717688 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:34.717702 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.717708 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.717715 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.719608 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:34.719624 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.719632 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.719640 30593 round_trippers.go:580] Audit-Id: cc656017-62ca-46cc-93aa-6f56e0bacf57
I1101 00:09:34.719647 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.719655 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.719663 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.719673 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.719831 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:34.720155 30593 pod_ready.go:97] node "multinode-391061" hosting pod "etcd-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.720173 30593 pod_ready.go:81] duration metric: took 5.809883ms waiting for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
E1101 00:09:34.720181 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "etcd-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.720222 30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:34.720281 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:34.720291 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.720302 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.720316 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.727693 30593 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I1101 00:09:34.727724 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.727735 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.727746 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.727757 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.727768 30593 round_trippers.go:580] Audit-Id: f429dcbd-b1c6-47e9-b094-3b51b74fd598
I1101 00:09:34.727779 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.727790 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.727953 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:34.728461 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:34.728479 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.728490 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.728500 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.730599 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:34.730613 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.730619 30593 round_trippers.go:580] Audit-Id: 0de3f8aa-089c-4434-b8d3-d71e99713bfd
I1101 00:09:34.730624 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.730632 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.730644 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.730660 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.730670 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.730850 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:34.731213 30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-apiserver-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.731234 30593 pod_ready.go:81] duration metric: took 11.0013ms waiting for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
E1101 00:09:34.731247 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-apiserver-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.731266 30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:34.731321 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-391061
I1101 00:09:34.731332 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.731342 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.731350 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.735460 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:34.735475 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.735481 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.735488 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.735501 30593 round_trippers.go:580] Audit-Id: 2bd7494f-9968-4fd2-aca0-bb70496933d6
I1101 00:09:34.735518 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.735525 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.735540 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.735848 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-391061","namespace":"kube-system","uid":"4775e566-6acd-43ac-b7cd-8dbd245c33cf","resourceVersion":"1178","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.mirror":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059092388Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1101 00:09:34.736287 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:34.736300 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.736307 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.736315 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.738460 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:34.738480 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.738490 30593 round_trippers.go:580] Audit-Id: b9555108-2183-46ca-b82f-b9cd6213e770
I1101 00:09:34.738511 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.738524 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.738532 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.738547 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.738555 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.738690 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:34.739057 30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-controller-manager-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.739086 30593 pod_ready.go:81] duration metric: took 7.809638ms waiting for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
E1101 00:09:34.739103 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-controller-manager-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.739113 30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
I1101 00:09:34.914034 30593 request.go:629] Waited for 174.835524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clsrp
I1101 00:09:34.914109 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clsrp
I1101 00:09:34.914114 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.914121 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.914131 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.916919 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:34.916946 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.916955 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.916964 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.916972 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.916983 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.916990 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.917003 30593 round_trippers.go:580] Audit-Id: 7b74a314-8cec-4d22-9be3-8af74ba926c4
I1101 00:09:34.917222 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-clsrp","generateName":"kube-proxy-","namespace":"kube-system","uid":"a747b091-d679-4ae6-a995-c980235c9a61","resourceVersion":"1203","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
I1101 00:09:35.113972 30593 request.go:629] Waited for 196.314968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:35.114094 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:35.114106 30593 round_trippers.go:469] Request Headers:
I1101 00:09:35.114117 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:35.114128 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:35.116700 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:35.116727 30593 round_trippers.go:577] Response Headers:
I1101 00:09:35.116736 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:35.116744 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:35 GMT
I1101 00:09:35.116752 30593 round_trippers.go:580] Audit-Id: 520e1602-a5d2-496e-9336-3d05ae9bf431
I1101 00:09:35.116760 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:35.116769 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:35.116778 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:35.116880 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:35.117203 30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-proxy-clsrp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:35.117220 30593 pod_ready.go:81] duration metric: took 378.09771ms waiting for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
E1101 00:09:35.117234 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-proxy-clsrp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:35.117249 30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
I1101 00:09:35.314720 30593 request.go:629] Waited for 197.37685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
I1101 00:09:35.314784 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
I1101 00:09:35.314790 30593 round_trippers.go:469] Request Headers:
I1101 00:09:35.314797 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:35.314806 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:35.317474 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:35.317495 30593 round_trippers.go:577] Response Headers:
I1101 00:09:35.317502 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:35 GMT
I1101 00:09:35.317508 30593 round_trippers.go:580] Audit-Id: 9af5c93f-eeb8-4bf5-91cf-0004ad594526
I1101 00:09:35.317513 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:35.317526 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:35.317532 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:35.317537 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:35.317656 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rcnv9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9","resourceVersion":"983","creationTimestamp":"2023-11-01T00:03:22Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:03:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5749 chars]
I1101 00:09:35.514541 30593 request.go:629] Waited for 196.422301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
I1101 00:09:35.514605 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
I1101 00:09:35.514610 30593 round_trippers.go:469] Request Headers:
I1101 00:09:35.514620 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:35.514626 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:35.516964 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:35.516981 30593 round_trippers.go:577] Response Headers:
I1101 00:09:35.516987 30593 round_trippers.go:580] Audit-Id: f60ca5be-eff7-45b6-b4ef-25a4244f2ac8
I1101 00:09:35.516992 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:35.516999 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:35.517007 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:35.517016 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:35.517024 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:35 GMT
I1101 00:09:35.517144 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061-m02","uid":"75fe164a-6fd6-4525-bacf-d792a509255b","resourceVersion":"999","creationTimestamp":"2023-11-01T00:07:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3253 chars]
I1101 00:09:35.517386 30593 pod_ready.go:92] pod "kube-proxy-rcnv9" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:35.517399 30593 pod_ready.go:81] duration metric: took 400.144025ms waiting for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
I1101 00:09:35.517407 30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
I1101 00:09:35.713801 30593 request.go:629] Waited for 196.321571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
I1101 00:09:35.713897 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
I1101 00:09:35.713902 30593 round_trippers.go:469] Request Headers:
I1101 00:09:35.713912 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:35.713919 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:35.718570 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:35.718593 30593 round_trippers.go:577] Response Headers:
I1101 00:09:35.718599 30593 round_trippers.go:580] Audit-Id: a80b7d1f-2804-4453-9d76-e2f5feeecd8b
I1101 00:09:35.718604 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:35.718609 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:35.718614 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:35.718619 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:35.718624 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:35 GMT
I1101 00:09:35.719017 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vdjh2","generateName":"kube-proxy-","namespace":"kube-system","uid":"9838a111-09e4-4975-b925-1ae5dcfa7334","resourceVersion":"1096","creationTimestamp":"2023-11-01T00:04:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
I1101 00:09:35.914812 30593 request.go:629] Waited for 195.361033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
I1101 00:09:35.914878 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
I1101 00:09:35.914884 30593 round_trippers.go:469] Request Headers:
I1101 00:09:35.914892 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:35.914905 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:35.918630 30593 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I1101 00:09:35.918651 30593 round_trippers.go:577] Response Headers:
I1101 00:09:35.918658 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:35.918669 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:35.918675 30593 round_trippers.go:580] Content-Length: 210
I1101 00:09:35.918680 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:35 GMT
I1101 00:09:35.918685 30593 round_trippers.go:580] Audit-Id: 8559bcdf-7ea2-4533-82a7-71b9489af62e
I1101 00:09:35.918693 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:35.918698 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:35.918716 30593 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-391061-m03\" not found","reason":"NotFound","details":{"name":"multinode-391061-m03","kind":"nodes"},"code":404}
I1101 00:09:35.918899 30593 pod_ready.go:97] node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
I1101 00:09:35.918915 30593 pod_ready.go:81] duration metric: took 401.503391ms waiting for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
E1101 00:09:35.918928 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
I1101 00:09:35.918938 30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:36.114381 30593 request.go:629] Waited for 195.370649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:36.114441 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:36.114446 30593 round_trippers.go:469] Request Headers:
I1101 00:09:36.114453 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:36.114459 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:36.117280 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:36.117299 30593 round_trippers.go:577] Response Headers:
I1101 00:09:36.117305 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:36.117310 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:36.117316 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:36.117324 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:36 GMT
I1101 00:09:36.117332 30593 round_trippers.go:580] Audit-Id: 1a904aba-8eb8-4b24-84bc-bed0f6168940
I1101 00:09:36.117345 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:36.117488 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1187","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
I1101 00:09:36.314311 30593 request.go:629] Waited for 196.435913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:36.314416 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:36.314424 30593 round_trippers.go:469] Request Headers:
I1101 00:09:36.314432 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:36.314438 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:36.317156 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:36.317180 30593 round_trippers.go:577] Response Headers:
I1101 00:09:36.317187 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:36.317193 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:36 GMT
I1101 00:09:36.317198 30593 round_trippers.go:580] Audit-Id: 438f8f57-c6d3-4b09-82e1-c9c57e8542d5
I1101 00:09:36.317207 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:36.317226 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:36.317232 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:36.317370 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:36.317685 30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-scheduler-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:36.317702 30593 pod_ready.go:81] duration metric: took 398.74998ms waiting for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
E1101 00:09:36.317710 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-scheduler-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:36.317717 30593 pod_ready.go:38] duration metric: took 1.641059341s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 00:09:36.317736 30593 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1101 00:09:36.328581 30593 command_runner.go:130] > -16
I1101 00:09:36.329017 30593 ops.go:34] apiserver oom_adj: -16
I1101 00:09:36.329031 30593 kubeadm.go:640] restartCluster took 22.492412523s
I1101 00:09:36.329039 30593 kubeadm.go:406] StartCluster complete in 22.520229717s
I1101 00:09:36.329066 30593 settings.go:142] acquiring lock: {Name:mk57c659cffa0c6a1b184e5906c662f85ff8a099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 00:09:36.329145 30593 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17486-7251/kubeconfig
I1101 00:09:36.329734 30593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 00:09:36.329976 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1101 00:09:36.330139 30593 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
I1101 00:09:36.330259 30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1101 00:09:36.332516 30593 out.go:177] * Enabled addons:
I1101 00:09:36.330334 30593 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17486-7251/kubeconfig
I1101 00:09:36.334140 30593 addons.go:502] enable addons completed in 4.002956ms: enabled=[]
I1101 00:09:36.332878 30593 kapi.go:59] client config for multinode-391061: &rest.Config{Host:"https://192.168.39.43:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 00:09:36.334423 30593 round_trippers.go:463] GET https://192.168.39.43:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I1101 00:09:36.334436 30593 round_trippers.go:469] Request Headers:
I1101 00:09:36.334446 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:36.334454 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:36.337955 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:36.337986 30593 round_trippers.go:577] Response Headers:
I1101 00:09:36.337996 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:36.338004 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:36.338012 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:36.338027 30593 round_trippers.go:580] Content-Length: 292
I1101 00:09:36.338038 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:36 GMT
I1101 00:09:36.338050 30593 round_trippers.go:580] Audit-Id: 9324051b-7b18-4bb3-a5fe-00967444602f
I1101 00:09:36.338061 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:36.338088 30593 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0a6ee33a-4e79-49d5-be0e-4e19b76eb2c6","resourceVersion":"1206","creationTimestamp":"2023-11-01T00:02:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I1101 00:09:36.338210 30593 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-391061" context rescaled to 1 replicas
I1101 00:09:36.338240 30593 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1101 00:09:36.340479 30593 out.go:177] * Verifying Kubernetes components...
I1101 00:09:36.342243 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 00:09:36.464070 30593 command_runner.go:130] > apiVersion: v1
I1101 00:09:36.464088 30593 command_runner.go:130] > data:
I1101 00:09:36.464092 30593 command_runner.go:130] > Corefile: |
I1101 00:09:36.464096 30593 command_runner.go:130] > .:53 {
I1101 00:09:36.464099 30593 command_runner.go:130] > log
I1101 00:09:36.464104 30593 command_runner.go:130] > errors
I1101 00:09:36.464108 30593 command_runner.go:130] > health {
I1101 00:09:36.464112 30593 command_runner.go:130] > lameduck 5s
I1101 00:09:36.464116 30593 command_runner.go:130] > }
I1101 00:09:36.464124 30593 command_runner.go:130] > ready
I1101 00:09:36.464129 30593 command_runner.go:130] > kubernetes cluster.local in-addr.arpa ip6.arpa {
I1101 00:09:36.464134 30593 command_runner.go:130] > pods insecure
I1101 00:09:36.464139 30593 command_runner.go:130] > fallthrough in-addr.arpa ip6.arpa
I1101 00:09:36.464143 30593 command_runner.go:130] > ttl 30
I1101 00:09:36.464147 30593 command_runner.go:130] > }
I1101 00:09:36.464151 30593 command_runner.go:130] > prometheus :9153
I1101 00:09:36.464154 30593 command_runner.go:130] > hosts {
I1101 00:09:36.464159 30593 command_runner.go:130] > 192.168.39.1 host.minikube.internal
I1101 00:09:36.464163 30593 command_runner.go:130] > fallthrough
I1101 00:09:36.464167 30593 command_runner.go:130] > }
I1101 00:09:36.464175 30593 command_runner.go:130] > forward . /etc/resolv.conf {
I1101 00:09:36.464180 30593 command_runner.go:130] > max_concurrent 1000
I1101 00:09:36.464184 30593 command_runner.go:130] > }
I1101 00:09:36.464188 30593 command_runner.go:130] > cache 30
I1101 00:09:36.464193 30593 command_runner.go:130] > loop
I1101 00:09:36.464198 30593 command_runner.go:130] > reload
I1101 00:09:36.464202 30593 command_runner.go:130] > loadbalance
I1101 00:09:36.464217 30593 command_runner.go:130] > }
I1101 00:09:36.464224 30593 command_runner.go:130] > kind: ConfigMap
I1101 00:09:36.464228 30593 command_runner.go:130] > metadata:
I1101 00:09:36.464233 30593 command_runner.go:130] > creationTimestamp: "2023-11-01T00:02:20Z"
I1101 00:09:36.464237 30593 command_runner.go:130] > name: coredns
I1101 00:09:36.464242 30593 command_runner.go:130] > namespace: kube-system
I1101 00:09:36.464246 30593 command_runner.go:130] > resourceVersion: "404"
I1101 00:09:36.464251 30593 command_runner.go:130] > uid: 9916bcab-f9a6-4b1c-a0a4-a33e2e2f738c
I1101 00:09:36.466580 30593 node_ready.go:35] waiting up to 6m0s for node "multinode-391061" to be "Ready" ...
I1101 00:09:36.466667 30593 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
I1101 00:09:36.513888 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:36.513918 30593 round_trippers.go:469] Request Headers:
I1101 00:09:36.513926 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:36.513933 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:36.516967 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:36.516991 30593 round_trippers.go:577] Response Headers:
I1101 00:09:36.517002 30593 round_trippers.go:580] Audit-Id: 4d84eb47-da1a-4fd0-96d7-b23c142dcf7c
I1101 00:09:36.517010 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:36.517018 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:36.517030 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:36.517038 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:36.517064 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:36 GMT
I1101 00:09:36.517425 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:36.714232 30593 request.go:629] Waited for 196.4313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:36.714301 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:36.714308 30593 round_trippers.go:469] Request Headers:
I1101 00:09:36.714319 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:36.714329 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:36.716978 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:36.716999 30593 round_trippers.go:577] Response Headers:
I1101 00:09:36.717006 30593 round_trippers.go:580] Audit-Id: 043fbdbd-3263-4587-9070-be445407c188
I1101 00:09:36.717012 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:36.717017 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:36.717022 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:36.717027 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:36.717035 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:36 GMT
I1101 00:09:36.717202 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:37.218413 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:37.218434 30593 round_trippers.go:469] Request Headers:
I1101 00:09:37.218447 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:37.218453 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:37.222719 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:37.222748 30593 round_trippers.go:577] Response Headers:
I1101 00:09:37.222759 30593 round_trippers.go:580] Audit-Id: 917dad8e-af16-42b6-88ae-5dcab424bb1e
I1101 00:09:37.222768 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:37.222778 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:37.222790 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:37.222802 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:37.222813 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:37 GMT
I1101 00:09:37.223475 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:37.718082 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:37.718126 30593 round_trippers.go:469] Request Headers:
I1101 00:09:37.718135 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:37.718141 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:37.721049 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:37.721077 30593 round_trippers.go:577] Response Headers:
I1101 00:09:37.721088 30593 round_trippers.go:580] Audit-Id: 06dcc7c1-bdd2-4e9f-870d-80146268aafa
I1101 00:09:37.721101 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:37.721121 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:37.721130 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:37.721139 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:37.721148 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:37 GMT
I1101 00:09:37.721272 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:38.218868 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:38.218893 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.218903 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.218912 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.222059 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:38.222083 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.222105 30593 round_trippers.go:580] Audit-Id: ad14bc98-1add-4a13-8ab1-495ec6575c6e
I1101 00:09:38.222111 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.222116 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.222121 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.222126 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.222131 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.222638 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:38.718331 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:38.718356 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.718364 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.718370 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.721280 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:38.721307 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.721314 30593 round_trippers.go:580] Audit-Id: 32a342cc-ec48-43cc-b0f0-efe6838ba34f
I1101 00:09:38.721319 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.721324 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.721329 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.721334 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.721339 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.721695 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:38.722003 30593 node_ready.go:49] node "multinode-391061" has status "Ready":"True"
I1101 00:09:38.722018 30593 node_ready.go:38] duration metric: took 2.255410222s waiting for node "multinode-391061" to be "Ready" ...
I1101 00:09:38.722030 30593 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 00:09:38.722093 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:38.722102 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.722113 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.722121 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.726178 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:38.726200 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.726211 30593 round_trippers.go:580] Audit-Id: d4651bc2-6bb9-4745-9c25-8f2b530c877c
I1101 00:09:38.726220 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.726227 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.726236 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.726244 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.726253 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.727979 30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1218"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84372 chars]
I1101 00:09:38.731666 30593 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
I1101 00:09:38.731777 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:38.731788 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.731797 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.731804 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.734353 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:38.734368 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.734375 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.734380 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.734386 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.734391 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.734396 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.734401 30593 round_trippers.go:580] Audit-Id: f0f6d35c-893f-4b34-bb39-154e16bedbe1
I1101 00:09:38.734672 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:38.735183 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:38.735200 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.735208 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.735214 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.737368 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:38.737382 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.737388 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.737393 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.737398 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.737405 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.737418 30593 round_trippers.go:580] Audit-Id: f978b19f-d984-48d1-b95c-0f850f106969
I1101 00:09:38.737423 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.737700 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:38.738062 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:38.738078 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.738086 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.738092 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.740363 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:38.740379 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.740385 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.740390 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.740395 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.740408 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.740418 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.740423 30593 round_trippers.go:580] Audit-Id: c33f3cc3-4753-4832-a887-2f2bce060625
I1101 00:09:38.740727 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:38.741200 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:38.741213 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.741220 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.741226 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.743369 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:38.743385 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.743392 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.743397 30593 round_trippers.go:580] Audit-Id: ccc0a48d-0d10-468a-a49f-71ad3ebd3363
I1101 00:09:38.743402 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.743407 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.743414 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.743419 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.743797 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:39.244680 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:39.244705 30593 round_trippers.go:469] Request Headers:
I1101 00:09:39.244713 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:39.244719 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:39.249913 30593 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I1101 00:09:39.249935 30593 round_trippers.go:577] Response Headers:
I1101 00:09:39.249943 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:39.249948 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:39.249954 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:39.249959 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:39.249964 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:39 GMT
I1101 00:09:39.249971 30593 round_trippers.go:580] Audit-Id: 12d94c73-c75e-46e9-871a-9b74acd630d6
I1101 00:09:39.250237 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:39.250731 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:39.250745 30593 round_trippers.go:469] Request Headers:
I1101 00:09:39.250754 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:39.250760 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:39.253732 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:39.253752 30593 round_trippers.go:577] Response Headers:
I1101 00:09:39.253761 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:39.253770 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:39 GMT
I1101 00:09:39.253778 30593 round_trippers.go:580] Audit-Id: 2a48db27-174b-4246-a989-ca7f61b115f9
I1101 00:09:39.253787 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:39.253793 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:39.253798 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:39.254037 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:39.744690 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:39.744715 30593 round_trippers.go:469] Request Headers:
I1101 00:09:39.744724 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:39.744729 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:39.748026 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:39.748050 30593 round_trippers.go:577] Response Headers:
I1101 00:09:39.748060 30593 round_trippers.go:580] Audit-Id: d31dc218-4603-4f82-a559-2e3697ff06e2
I1101 00:09:39.748072 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:39.748080 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:39.748087 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:39.748098 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:39.748105 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:39 GMT
I1101 00:09:39.748732 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:39.749181 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:39.749196 30593 round_trippers.go:469] Request Headers:
I1101 00:09:39.749206 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:39.749215 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:39.751958 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:39.751980 30593 round_trippers.go:577] Response Headers:
I1101 00:09:39.751989 30593 round_trippers.go:580] Audit-Id: b460f490-de79-4762-b30a-6cdd07942ced
I1101 00:09:39.751997 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:39.752005 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:39.752015 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:39.752021 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:39.752029 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:39 GMT
I1101 00:09:39.752310 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:40.244413 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:40.244438 30593 round_trippers.go:469] Request Headers:
I1101 00:09:40.244446 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:40.244452 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:40.248489 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:40.248512 30593 round_trippers.go:577] Response Headers:
I1101 00:09:40.248521 30593 round_trippers.go:580] Audit-Id: ccff4954-c9ff-4a7f-9536-aa2b767dc311
I1101 00:09:40.248528 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:40.248533 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:40.248538 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:40.248544 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:40.248549 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:40 GMT
I1101 00:09:40.248729 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:40.249180 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:40.249194 30593 round_trippers.go:469] Request Headers:
I1101 00:09:40.249201 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:40.249209 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:40.252171 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:40.252188 30593 round_trippers.go:577] Response Headers:
I1101 00:09:40.252194 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:40.252199 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:40.252203 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:40.252208 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:40.252213 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:40 GMT
I1101 00:09:40.252218 30593 round_trippers.go:580] Audit-Id: ca95e9f6-880f-4555-aa29-16a66b7bf628
I1101 00:09:40.252484 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:40.745314 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:40.745341 30593 round_trippers.go:469] Request Headers:
I1101 00:09:40.745350 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:40.745357 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:40.747878 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:40.747895 30593 round_trippers.go:577] Response Headers:
I1101 00:09:40.747902 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:40.747910 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:40.747924 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:40 GMT
I1101 00:09:40.747932 30593 round_trippers.go:580] Audit-Id: b88089ad-e6cf-4b38-b7fb-da565b4e5c79
I1101 00:09:40.747940 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:40.747951 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:40.748125 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:40.748587 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:40.748601 30593 round_trippers.go:469] Request Headers:
I1101 00:09:40.748611 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:40.748617 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:40.750689 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:40.750703 30593 round_trippers.go:577] Response Headers:
I1101 00:09:40.750710 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:40 GMT
I1101 00:09:40.750721 30593 round_trippers.go:580] Audit-Id: 3a208361-9be9-4a15-8f86-f26ff624d9b3
I1101 00:09:40.750729 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:40.750736 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:40.750744 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:40.750755 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:40.750912 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:40.751208 30593 pod_ready.go:102] pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace has status "Ready":"False"
I1101 00:09:41.244531 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:41.244555 30593 round_trippers.go:469] Request Headers:
I1101 00:09:41.244563 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:41.244569 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:41.247236 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:41.247254 30593 round_trippers.go:577] Response Headers:
I1101 00:09:41.247264 30593 round_trippers.go:580] Audit-Id: 0a7a1192-7352-4f99-a239-ebbd6ca40e85
I1101 00:09:41.247272 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:41.247279 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:41.247289 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:41.247298 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:41.247318 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:41 GMT
I1101 00:09:41.247449 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:41.247870 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:41.247882 30593 round_trippers.go:469] Request Headers:
I1101 00:09:41.247889 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:41.247894 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:41.250080 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:41.250098 30593 round_trippers.go:577] Response Headers:
I1101 00:09:41.250104 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:41.250109 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:41 GMT
I1101 00:09:41.250114 30593 round_trippers.go:580] Audit-Id: 629d69c5-3174-4a7d-aa0d-8f22f6d5b2f6
I1101 00:09:41.250130 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:41.250138 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:41.250146 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:41.250326 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:41.745038 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:41.745066 30593 round_trippers.go:469] Request Headers:
I1101 00:09:41.745074 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:41.745080 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:41.748544 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:41.748570 30593 round_trippers.go:577] Response Headers:
I1101 00:09:41.748581 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:41.748590 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:41.748598 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:41.748606 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:41 GMT
I1101 00:09:41.748625 30593 round_trippers.go:580] Audit-Id: b22bcb01-f5bf-4a1d-aad0-6c0ab2d577d4
I1101 00:09:41.748637 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:41.748855 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:41.749306 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:41.749318 30593 round_trippers.go:469] Request Headers:
I1101 00:09:41.749325 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:41.749331 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:41.755594 30593 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I1101 00:09:41.755639 30593 round_trippers.go:577] Response Headers:
I1101 00:09:41.755649 30593 round_trippers.go:580] Audit-Id: a64448a4-caec-4cfe-9700-2fbbc35230d2
I1101 00:09:41.755657 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:41.755665 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:41.755673 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:41.755680 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:41.755695 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:41 GMT
I1101 00:09:41.755860 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:42.244432 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:42.244456 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.244464 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.244470 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.247204 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:42.247227 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.247238 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.247247 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.247256 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.247267 30593 round_trippers.go:580] Audit-Id: 003f9883-5c30-40fd-aa1f-88b585473b07
I1101 00:09:42.247272 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.247278 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.247475 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1232","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
I1101 00:09:42.248064 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:42.248082 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.248093 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.248100 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.251135 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:42.251152 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.251158 30593 round_trippers.go:580] Audit-Id: 1d944e3b-2b90-4cb4-b54e-e4dc8e023493
I1101 00:09:42.251168 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.251172 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.251177 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.251182 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.251187 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.251385 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:42.251763 30593 pod_ready.go:92] pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:42.251782 30593 pod_ready.go:81] duration metric: took 3.52008861s waiting for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
I1101 00:09:42.251794 30593 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:42.251868 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-391061
I1101 00:09:42.251880 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.251891 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.251901 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.253932 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:42.253950 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.253957 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.253962 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.253967 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.253975 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.253980 30593 round_trippers.go:580] Audit-Id: 8a73d4e8-1e4e-4883-908a-5c09ce62f8c3
I1101 00:09:42.253985 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.254150 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-391061","namespace":"kube-system","uid":"0537cc4c-2127-4424-b02f-9e4747bc8713","resourceVersion":"1227","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.43:2379","kubernetes.io/config.hash":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.mirror":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.seen":"2023-11-01T00:02:21.059094445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6072 chars]
I1101 00:09:42.254640 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:42.254655 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.254674 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.254685 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.256694 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:42.256708 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.256715 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.256723 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.256731 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.256740 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.256749 30593 round_trippers.go:580] Audit-Id: 4c1b620e-fff1-4494-89d2-83c513fc0fc0
I1101 00:09:42.256757 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.256951 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:42.257268 30593 pod_ready.go:92] pod "etcd-multinode-391061" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:42.257283 30593 pod_ready.go:81] duration metric: took 5.477797ms waiting for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:42.257306 30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:42.257369 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:42.257379 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.257390 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.257399 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.259467 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:42.259483 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.259492 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.259499 30593 round_trippers.go:580] Audit-Id: 05d95e16-1d4e-4f81-a9d5-b2b141ff765d
I1101 00:09:42.259508 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.259517 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.259526 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.259535 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.259733 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:42.260255 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:42.260274 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.260281 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.260287 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.262250 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:42.262265 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.262275 30593 round_trippers.go:580] Audit-Id: ff748f0c-35a9-4061-b5ed-b0472309e27b
I1101 00:09:42.262282 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.262290 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.262298 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.262310 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.262318 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.262580 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:42.314176 30593 request.go:629] Waited for 51.260114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:42.314237 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:42.314242 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.314249 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.314256 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.317908 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:42.317937 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.317948 30593 round_trippers.go:580] Audit-Id: fa52f436-6e2b-418e-972d-6b4c1f1c0fcb
I1101 00:09:42.317957 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.317966 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.317971 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.317976 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.317984 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.318154 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:42.514148 30593 request.go:629] Waited for 195.42483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:42.514213 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:42.514221 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.514235 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.514291 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.516991 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:42.517017 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.517026 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.517035 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.517044 30593 round_trippers.go:580] Audit-Id: 71439942-ddcd-4159-8952-4d34c7b14582
I1101 00:09:42.517052 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.517059 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.517068 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.517221 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:43.018410 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:43.018439 30593 round_trippers.go:469] Request Headers:
I1101 00:09:43.018449 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:43.018459 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:43.021587 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:43.021609 30593 round_trippers.go:577] Response Headers:
I1101 00:09:43.021616 30593 round_trippers.go:580] Audit-Id: 7c4f42ca-82c7-4601-9dd3-7fa193eec32f
I1101 00:09:43.021621 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:43.021626 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:43.021631 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:43.021636 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:43.021642 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:43.021917 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:43.022342 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:43.022357 30593 round_trippers.go:469] Request Headers:
I1101 00:09:43.022368 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:43.022376 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:43.025247 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:43.025262 30593 round_trippers.go:577] Response Headers:
I1101 00:09:43.025268 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:43.025280 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:43.025289 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:43.025298 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:43.025310 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:43.025316 30593 round_trippers.go:580] Audit-Id: a4d1586f-de58-43b9-93f2-43b9726b8133
I1101 00:09:43.025864 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:43.518711 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:43.518737 30593 round_trippers.go:469] Request Headers:
I1101 00:09:43.518746 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:43.518752 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:43.521991 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:43.522017 30593 round_trippers.go:577] Response Headers:
I1101 00:09:43.522027 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:43.522036 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:43.522044 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:43 GMT
I1101 00:09:43.522058 30593 round_trippers.go:580] Audit-Id: ee145f23-1a35-4e40-acd4-1b329858fdfd
I1101 00:09:43.522065 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:43.522076 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:43.522321 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:43.522816 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:43.522832 30593 round_trippers.go:469] Request Headers:
I1101 00:09:43.522839 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:43.522845 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:43.525300 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:43.525321 30593 round_trippers.go:577] Response Headers:
I1101 00:09:43.525329 30593 round_trippers.go:580] Audit-Id: a16446ac-4c9e-462b-a604-37ce52442eb5
I1101 00:09:43.525336 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:43.525344 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:43.525351 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:43.525358 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:43.525365 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:43 GMT
I1101 00:09:43.525589 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:44.018504 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:44.018526 30593 round_trippers.go:469] Request Headers:
I1101 00:09:44.018534 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:44.018539 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:44.021345 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:44.021368 30593 round_trippers.go:577] Response Headers:
I1101 00:09:44.021379 30593 round_trippers.go:580] Audit-Id: 23afddaf-e391-4a40-9206-ba5a97021cd1
I1101 00:09:44.021389 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:44.021397 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:44.021402 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:44.021408 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:44.021413 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:43 GMT
I1101 00:09:44.021781 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:44.022178 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:44.022191 30593 round_trippers.go:469] Request Headers:
I1101 00:09:44.022201 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:44.022206 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:44.024358 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:44.024374 30593 round_trippers.go:577] Response Headers:
I1101 00:09:44.024380 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:44.024385 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:44.024390 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:44.024395 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:43 GMT
I1101 00:09:44.024400 30593 round_trippers.go:580] Audit-Id: 10d30ea6-f2a4-4468-b8d9-fe4d25cd5e9a
I1101 00:09:44.024404 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:44.024539 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:44.518209 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:44.518235 30593 round_trippers.go:469] Request Headers:
I1101 00:09:44.518243 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:44.518249 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:44.521184 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:44.521208 30593 round_trippers.go:577] Response Headers:
I1101 00:09:44.521218 30593 round_trippers.go:580] Audit-Id: fc8c6383-2699-422a-8176-ddcab44a9a9c
I1101 00:09:44.521238 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:44.521246 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:44.521255 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:44.521264 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:44.521273 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:44 GMT
I1101 00:09:44.521459 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:44.521894 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:44.521907 30593 round_trippers.go:469] Request Headers:
I1101 00:09:44.521914 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:44.521920 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:44.524063 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:44.524079 30593 round_trippers.go:577] Response Headers:
I1101 00:09:44.524085 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:44 GMT
I1101 00:09:44.524135 30593 round_trippers.go:580] Audit-Id: e14e26a5-28ca-4d3f-bae4-eea46c9e3a5b
I1101 00:09:44.524159 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:44.524167 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:44.524177 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:44.524182 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:44.524354 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:44.524642 30593 pod_ready.go:102] pod "kube-apiserver-multinode-391061" in "kube-system" namespace has status "Ready":"False"
I1101 00:09:45.017778 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:45.017807 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.017815 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.017822 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.021073 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:45.021103 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.021114 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.021124 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.021133 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.021142 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.021151 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:44 GMT
I1101 00:09:45.021160 30593 round_trippers.go:580] Audit-Id: 0dd2be34-8929-487b-8348-a144ffa6b941
I1101 00:09:45.021400 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:45.021872 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:45.021889 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.021897 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.021908 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.024844 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:45.024865 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.024874 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.024882 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.024889 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:44 GMT
I1101 00:09:45.024897 30593 round_trippers.go:580] Audit-Id: db32154e-ea80-4382-b7a1-53821506f75f
I1101 00:09:45.024905 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.024912 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.025668 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:45.518404 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:45.518429 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.518437 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.518442 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.521045 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:45.521065 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.521072 30593 round_trippers.go:580] Audit-Id: 32e5cb3c-6d81-4568-831d-7a0dc39dbca2
I1101 00:09:45.521077 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.521088 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.521093 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.521098 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.521103 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.521484 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1242","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7607 chars]
I1101 00:09:45.521900 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:45.521917 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.521924 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.521929 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.524067 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:45.524082 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.524088 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.524096 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.524104 30593 round_trippers.go:580] Audit-Id: 31736dc5-73c3-44fb-9ab2-5a9f73f0e730
I1101 00:09:45.524113 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.524121 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.524130 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.524429 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:45.524707 30593 pod_ready.go:92] pod "kube-apiserver-multinode-391061" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:45.524722 30593 pod_ready.go:81] duration metric: took 3.267408141s waiting for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:45.524730 30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:45.524780 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-391061
I1101 00:09:45.524789 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.524796 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.524801 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.526609 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:45.526623 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.526629 30593 round_trippers.go:580] Audit-Id: c91e4f63-f1b9-4d99-b2a0-1ae44d4e3920
I1101 00:09:45.526634 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.526639 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.526644 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.526649 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.526654 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.526976 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-391061","namespace":"kube-system","uid":"4775e566-6acd-43ac-b7cd-8dbd245c33cf","resourceVersion":"1240","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.mirror":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059092388Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
I1101 00:09:45.527354 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:45.527366 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.527373 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.527379 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.529038 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:45.529053 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.529064 30593 round_trippers.go:580] Audit-Id: 6d668043-98c8-4c98-9b23-07c7419995e3
I1101 00:09:45.529069 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.529074 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.529079 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.529084 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.529089 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.529310 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:45.529599 30593 pod_ready.go:92] pod "kube-controller-manager-multinode-391061" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:45.529612 30593 pod_ready.go:81] duration metric: took 4.877104ms waiting for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:45.529629 30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
I1101 00:09:45.529698 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clsrp
I1101 00:09:45.529709 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.529717 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.529727 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.531667 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:45.531685 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.531694 30593 round_trippers.go:580] Audit-Id: 179e6548-b6dd-4972-8941-597dc0f20790
I1101 00:09:45.531703 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.531718 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.531724 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.531731 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.531737 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.532195 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-clsrp","generateName":"kube-proxy-","namespace":"kube-system","uid":"a747b091-d679-4ae6-a995-c980235c9a61","resourceVersion":"1203","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
I1101 00:09:45.713849 30593 request.go:629] Waited for 181.057235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:45.713909 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:45.713914 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.713921 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.713927 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.716619 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:45.716637 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.716643 30593 round_trippers.go:580] Audit-Id: 426c242f-3496-4e53-8631-c1189b21932f
I1101 00:09:45.716649 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.716657 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.716665 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.716677 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.716689 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.716889 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:45.717308 30593 pod_ready.go:92] pod "kube-proxy-clsrp" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:45.717325 30593 pod_ready.go:81] duration metric: took 187.686843ms waiting for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
I1101 00:09:45.717337 30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
I1101 00:09:45.914796 30593 request.go:629] Waited for 197.399239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
I1101 00:09:45.914852 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
I1101 00:09:45.914857 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.914864 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.914871 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.917416 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:45.917445 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.917454 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.917462 30593 round_trippers.go:580] Audit-Id: 9cba40f3-3ad3-42a3-b93f-aa9cc6fc7dd3
I1101 00:09:45.917475 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.917480 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.917486 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.917492 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.917704 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rcnv9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9","resourceVersion":"983","creationTimestamp":"2023-11-01T00:03:22Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:03:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5749 chars]
I1101 00:09:46.114598 30593 request.go:629] Waited for 196.375687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
I1101 00:09:46.114664 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
I1101 00:09:46.114691 30593 round_trippers.go:469] Request Headers:
I1101 00:09:46.114704 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:46.114710 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:46.117340 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:46.117362 30593 round_trippers.go:577] Response Headers:
I1101 00:09:46.117371 30593 round_trippers.go:580] Audit-Id: fc111c34-c570-4e3f-9832-d982a0432bc7
I1101 00:09:46.117379 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:46.117388 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:46.117396 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:46.117408 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:46.117421 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:46 GMT
I1101 00:09:46.117518 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061-m02","uid":"75fe164a-6fd6-4525-bacf-d792a509255b","resourceVersion":"999","creationTimestamp":"2023-11-01T00:07:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3253 chars]
I1101 00:09:46.117775 30593 pod_ready.go:92] pod "kube-proxy-rcnv9" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:46.117792 30593 pod_ready.go:81] duration metric: took 400.44672ms waiting for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
I1101 00:09:46.117804 30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
I1101 00:09:46.314248 30593 request.go:629] Waited for 196.387545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
I1101 00:09:46.314341 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
I1101 00:09:46.314358 30593 round_trippers.go:469] Request Headers:
I1101 00:09:46.314369 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:46.314378 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:46.317400 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:46.317420 30593 round_trippers.go:577] Response Headers:
I1101 00:09:46.317429 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:46.317437 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:46 GMT
I1101 00:09:46.317445 30593 round_trippers.go:580] Audit-Id: feb64aac-545a-4487-be55-41e7c0e9ef0c
I1101 00:09:46.317454 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:46.317463 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:46.317473 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:46.317739 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vdjh2","generateName":"kube-proxy-","namespace":"kube-system","uid":"9838a111-09e4-4975-b925-1ae5dcfa7334","resourceVersion":"1096","creationTimestamp":"2023-11-01T00:04:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
I1101 00:09:46.514556 30593 request.go:629] Waited for 196.355467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
I1101 00:09:46.514623 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
I1101 00:09:46.514630 30593 round_trippers.go:469] Request Headers:
I1101 00:09:46.514642 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:46.514652 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:46.517667 30593 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1101 00:09:46.517686 30593 round_trippers.go:577] Response Headers:
I1101 00:09:46.517695 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:46 GMT
I1101 00:09:46.517703 30593 round_trippers.go:580] Audit-Id: dee8bed2-39ff-4ddf-9b35-2afcacefb08c
I1101 00:09:46.517710 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:46.517717 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:46.517725 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:46.517732 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:46.517743 30593 round_trippers.go:580] Content-Length: 210
I1101 00:09:46.517769 30593 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-391061-m03\" not found","reason":"NotFound","details":{"name":"multinode-391061-m03","kind":"nodes"},"code":404}
I1101 00:09:46.517879 30593 pod_ready.go:97] node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
I1101 00:09:46.517896 30593 pod_ready.go:81] duration metric: took 400.083902ms waiting for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
E1101 00:09:46.517909 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
I1101 00:09:46.517918 30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:46.714359 30593 request.go:629] Waited for 196.368032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:46.714428 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:46.714439 30593 round_trippers.go:469] Request Headers:
I1101 00:09:46.714450 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:46.714460 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:46.717601 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:46.717622 30593 round_trippers.go:577] Response Headers:
I1101 00:09:46.717631 30593 round_trippers.go:580] Audit-Id: b10ec514-fb68-4eb7-a82b-478bb7b2615a
I1101 00:09:46.717638 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:46.717646 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:46.717653 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:46.717660 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:46.717669 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:46 GMT
I1101 00:09:46.718240 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1187","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
I1101 00:09:46.913939 30593 request.go:629] Waited for 195.310235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:46.913993 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:46.913998 30593 round_trippers.go:469] Request Headers:
I1101 00:09:46.914005 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:46.914018 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:46.916550 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:46.916574 30593 round_trippers.go:577] Response Headers:
I1101 00:09:46.916590 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:46.916598 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:46.916605 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:46.916613 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:46 GMT
I1101 00:09:46.916622 30593 round_trippers.go:580] Audit-Id: 3fdb3127-adb6-4b1b-973b-56d6f01c7510
I1101 00:09:46.916635 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:46.916797 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:47.114664 30593 request.go:629] Waited for 197.399091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:47.114755 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:47.114767 30593 round_trippers.go:469] Request Headers:
I1101 00:09:47.114785 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:47.114799 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:47.117780 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:47.117799 30593 round_trippers.go:577] Response Headers:
I1101 00:09:47.117806 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:47.117812 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:47.117817 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:47.117822 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:47 GMT
I1101 00:09:47.117827 30593 round_trippers.go:580] Audit-Id: 88a0065a-7184-46f2-bd0b-8a0b89e70b44
I1101 00:09:47.117841 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:47.118061 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1187","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
I1101 00:09:47.313739 30593 request.go:629] Waited for 195.316992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:47.313819 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:47.313832 30593 round_trippers.go:469] Request Headers:
I1101 00:09:47.313850 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:47.313863 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:47.317452 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:47.317480 30593 round_trippers.go:577] Response Headers:
I1101 00:09:47.317490 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:47.317498 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:47.317506 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:47.317514 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:47.317522 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:47 GMT
I1101 00:09:47.317530 30593 round_trippers.go:580] Audit-Id: 2e316d17-f6a0-43df-b21e-ef5ee4396440
I1101 00:09:47.317759 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:47.818890 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:47.818917 30593 round_trippers.go:469] Request Headers:
I1101 00:09:47.818925 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:47.818932 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:47.821524 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:47.821546 30593 round_trippers.go:577] Response Headers:
I1101 00:09:47.821558 30593 round_trippers.go:580] Audit-Id: 50ab8a02-fab8-41d2-abe4-e6fa324b51f1
I1101 00:09:47.821566 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:47.821574 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:47.821582 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:47.821590 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:47.821600 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:47 GMT
I1101 00:09:47.822014 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1244","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
I1101 00:09:47.822399 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:47.822414 30593 round_trippers.go:469] Request Headers:
I1101 00:09:47.822432 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:47.822440 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:47.825524 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:47.825549 30593 round_trippers.go:577] Response Headers:
I1101 00:09:47.825559 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:47.825568 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:47 GMT
I1101 00:09:47.825576 30593 round_trippers.go:580] Audit-Id: cff53b13-6010-47a4-94a7-bfaa8a544728
I1101 00:09:47.825584 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:47.825592 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:47.825600 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:47.825781 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:47.826104 30593 pod_ready.go:92] pod "kube-scheduler-multinode-391061" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:47.826120 30593 pod_ready.go:81] duration metric: took 1.308189456s waiting for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:47.826129 30593 pod_ready.go:38] duration metric: took 9.10408386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 00:09:47.826150 30593 api_server.go:52] waiting for apiserver process to appear ...
I1101 00:09:47.826195 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:47.838151 30593 command_runner.go:130] > 1704
I1101 00:09:47.838274 30593 api_server.go:72] duration metric: took 11.499995093s to wait for apiserver process to appear ...
I1101 00:09:47.838293 30593 api_server.go:88] waiting for apiserver healthz status ...
I1101 00:09:47.838314 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:47.844117 30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
ok
I1101 00:09:47.844194 30593 round_trippers.go:463] GET https://192.168.39.43:8443/version
I1101 00:09:47.844207 30593 round_trippers.go:469] Request Headers:
I1101 00:09:47.844218 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:47.844226 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:47.845412 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:47.845425 30593 round_trippers.go:577] Response Headers:
I1101 00:09:47.845431 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:47.845436 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:47.845442 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:47.845450 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:47.845463 30593 round_trippers.go:580] Content-Length: 264
I1101 00:09:47.845475 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:47 GMT
I1101 00:09:47.845485 30593 round_trippers.go:580] Audit-Id: 1468702f-2934-4914-b020-c0a4990038b1
I1101 00:09:47.845504 30593 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.3",
"gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
"gitTreeState": "clean",
"buildDate": "2023-10-18T11:33:18Z",
"goVersion": "go1.20.10",
"compiler": "gc",
"platform": "linux/amd64"
}
I1101 00:09:47.845540 30593 api_server.go:141] control plane version: v1.28.3
I1101 00:09:47.845552 30593 api_server.go:131] duration metric: took 7.252944ms to wait for apiserver health ...
I1101 00:09:47.845562 30593 system_pods.go:43] waiting for kube-system pods to appear ...
I1101 00:09:47.913821 30593 request.go:629] Waited for 68.174041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:47.913881 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:47.913885 30593 round_trippers.go:469] Request Headers:
I1101 00:09:47.913893 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:47.913899 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:47.918202 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:47.918230 30593 round_trippers.go:577] Response Headers:
I1101 00:09:47.918239 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:47.918248 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:47.918254 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:47 GMT
I1101 00:09:47.918259 30593 round_trippers.go:580] Audit-Id: b30ccebe-8256-4a7d-a462-7b4e1d0cdfa8
I1101 00:09:47.918264 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:47.918269 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:47.920031 30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1232","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83346 chars]
I1101 00:09:47.922413 30593 system_pods.go:59] 12 kube-system pods found
I1101 00:09:47.922434 30593 system_pods.go:61] "coredns-5dd5756b68-dg5w7" [eb94555e-1465-4dec-9d6d-ebcbec02841e] Running
I1101 00:09:47.922438 30593 system_pods.go:61] "etcd-multinode-391061" [0537cc4c-2127-4424-b02f-9e4747bc8713] Running
I1101 00:09:47.922442 30593 system_pods.go:61] "kindnet-4jfj9" [2559e20b-85cf-43d5-8663-7ec855d71df9] Running
I1101 00:09:47.922446 30593 system_pods.go:61] "kindnet-lcljq" [171d5f22-d781-4224-88f7-f940ad9e747b] Running
I1101 00:09:47.922450 30593 system_pods.go:61] "kindnet-wrdhd" [85db010e-82bd-4efa-a760-0669bf1e52de] Running
I1101 00:09:47.922454 30593 system_pods.go:61] "kube-apiserver-multinode-391061" [dff82899-3db2-46a2-aea0-ec57d58be1c8] Running
I1101 00:09:47.922458 30593 system_pods.go:61] "kube-controller-manager-multinode-391061" [4775e566-6acd-43ac-b7cd-8dbd245c33cf] Running
I1101 00:09:47.922462 30593 system_pods.go:61] "kube-proxy-clsrp" [a747b091-d679-4ae6-a995-c980235c9a61] Running
I1101 00:09:47.922465 30593 system_pods.go:61] "kube-proxy-rcnv9" [9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9] Running
I1101 00:09:47.922476 30593 system_pods.go:61] "kube-proxy-vdjh2" [9838a111-09e4-4975-b925-1ae5dcfa7334] Running
I1101 00:09:47.922481 30593 system_pods.go:61] "kube-scheduler-multinode-391061" [eaf767ff-8f68-4b91-bcd7-b550481a6155] Running
I1101 00:09:47.922485 30593 system_pods.go:61] "storage-provisioner" [b0b970e9-7d0b-4e94-8ca8-2f3348eaf579] Running
I1101 00:09:47.922492 30593 system_pods.go:74] duration metric: took 76.924582ms to wait for pod list to return data ...
I1101 00:09:47.922513 30593 default_sa.go:34] waiting for default service account to be created ...
I1101 00:09:48.113860 30593 request.go:629] Waited for 191.269729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
I1101 00:09:48.113931 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
I1101 00:09:48.113936 30593 round_trippers.go:469] Request Headers:
I1101 00:09:48.113943 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:48.113949 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:48.117152 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:48.117173 30593 round_trippers.go:577] Response Headers:
I1101 00:09:48.117179 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:48.117184 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:48.117189 30593 round_trippers.go:580] Content-Length: 262
I1101 00:09:48.117194 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:48 GMT
I1101 00:09:48.117199 30593 round_trippers.go:580] Audit-Id: cf19f0f1-599a-4c01-a817-75c7ba89021a
I1101 00:09:48.117204 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:48.117209 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:48.117226 30593 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"331ecfcc-8852-4250-85c2-da77e5b314fe","resourceVersion":"364","creationTimestamp":"2023-11-01T00:02:33Z"}}]}
I1101 00:09:48.117391 30593 default_sa.go:45] found service account: "default"
I1101 00:09:48.117408 30593 default_sa.go:55] duration metric: took 194.889894ms for default service account to be created ...
I1101 00:09:48.117415 30593 system_pods.go:116] waiting for k8s-apps to be running ...
I1101 00:09:48.313818 30593 request.go:629] Waited for 196.325558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:48.313881 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:48.313886 30593 round_trippers.go:469] Request Headers:
I1101 00:09:48.313893 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:48.313899 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:48.317985 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:48.318004 30593 round_trippers.go:577] Response Headers:
I1101 00:09:48.318011 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:48.318018 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:48 GMT
I1101 00:09:48.318027 30593 round_trippers.go:580] Audit-Id: 7b682312-a373-4aac-a928-19f0e9f08ce4
I1101 00:09:48.318035 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:48.318042 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:48.318051 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:48.319258 30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1232","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83346 chars]
I1101 00:09:48.321698 30593 system_pods.go:86] 12 kube-system pods found
I1101 00:09:48.321724 30593 system_pods.go:89] "coredns-5dd5756b68-dg5w7" [eb94555e-1465-4dec-9d6d-ebcbec02841e] Running
I1101 00:09:48.321729 30593 system_pods.go:89] "etcd-multinode-391061" [0537cc4c-2127-4424-b02f-9e4747bc8713] Running
I1101 00:09:48.321733 30593 system_pods.go:89] "kindnet-4jfj9" [2559e20b-85cf-43d5-8663-7ec855d71df9] Running
I1101 00:09:48.321739 30593 system_pods.go:89] "kindnet-lcljq" [171d5f22-d781-4224-88f7-f940ad9e747b] Running
I1101 00:09:48.321743 30593 system_pods.go:89] "kindnet-wrdhd" [85db010e-82bd-4efa-a760-0669bf1e52de] Running
I1101 00:09:48.321747 30593 system_pods.go:89] "kube-apiserver-multinode-391061" [dff82899-3db2-46a2-aea0-ec57d58be1c8] Running
I1101 00:09:48.321752 30593 system_pods.go:89] "kube-controller-manager-multinode-391061" [4775e566-6acd-43ac-b7cd-8dbd245c33cf] Running
I1101 00:09:48.321756 30593 system_pods.go:89] "kube-proxy-clsrp" [a747b091-d679-4ae6-a995-c980235c9a61] Running
I1101 00:09:48.321762 30593 system_pods.go:89] "kube-proxy-rcnv9" [9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9] Running
I1101 00:09:48.321765 30593 system_pods.go:89] "kube-proxy-vdjh2" [9838a111-09e4-4975-b925-1ae5dcfa7334] Running
I1101 00:09:48.321772 30593 system_pods.go:89] "kube-scheduler-multinode-391061" [eaf767ff-8f68-4b91-bcd7-b550481a6155] Running
I1101 00:09:48.321777 30593 system_pods.go:89] "storage-provisioner" [b0b970e9-7d0b-4e94-8ca8-2f3348eaf579] Running
I1101 00:09:48.321785 30593 system_pods.go:126] duration metric: took 204.365858ms to wait for k8s-apps to be running ...
I1101 00:09:48.321794 30593 system_svc.go:44] waiting for kubelet service to be running ....
I1101 00:09:48.321835 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 00:09:48.334581 30593 system_svc.go:56] duration metric: took 12.775415ms WaitForService to wait for kubelet.
I1101 00:09:48.334608 30593 kubeadm.go:581] duration metric: took 11.996332779s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I1101 00:09:48.334634 30593 node_conditions.go:102] verifying NodePressure condition ...
I1101 00:09:48.514065 30593 request.go:629] Waited for 179.367734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes
I1101 00:09:48.514131 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes
I1101 00:09:48.514136 30593 round_trippers.go:469] Request Headers:
I1101 00:09:48.514144 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:48.514150 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:48.517017 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:48.517036 30593 round_trippers.go:577] Response Headers:
I1101 00:09:48.517043 30593 round_trippers.go:580] Audit-Id: acbda546-1395-4e94-a808-39a73ef2e8e6
I1101 00:09:48.517057 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:48.517063 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:48.517070 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:48.517077 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:48.517087 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:48 GMT
I1101 00:09:48.517358 30593 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma
nagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 9463 chars]
I1101 00:09:48.517853 30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1101 00:09:48.517873 30593 node_conditions.go:123] node cpu capacity is 2
I1101 00:09:48.517883 30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1101 00:09:48.517888 30593 node_conditions.go:123] node cpu capacity is 2
I1101 00:09:48.517892 30593 node_conditions.go:105] duration metric: took 183.255117ms to run NodePressure ...
I1101 00:09:48.517902 30593 start.go:228] waiting for startup goroutines ...
I1101 00:09:48.517913 30593 start.go:233] waiting for cluster config update ...
I1101 00:09:48.517918 30593 start.go:242] writing updated cluster config ...
I1101 00:09:48.518328 30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1101 00:09:48.518400 30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
I1101 00:09:48.521532 30593 out.go:177] * Starting worker node multinode-391061-m02 in cluster multinode-391061
I1101 00:09:48.522898 30593 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I1101 00:09:48.522933 30593 cache.go:56] Caching tarball of preloaded images
I1101 00:09:48.523028 30593 preload.go:174] Found /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1101 00:09:48.523039 30593 cache.go:59] Finished verifying existence of preloaded tar for v1.28.3 on docker
I1101 00:09:48.523130 30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
I1101 00:09:48.523306 30593 start.go:365] acquiring machines lock for multinode-391061-m02: {Name:mkd250049361a5d831a3d31c273569334737e54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1101 00:09:48.523347 30593 start.go:369] acquired machines lock for "multinode-391061-m02" in 23.277µs
I1101 00:09:48.523360 30593 start.go:96] Skipping create...Using existing machine configuration
I1101 00:09:48.523365 30593 fix.go:54] fixHost starting: m02
I1101 00:09:48.523626 30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1101 00:09:48.523657 30593 main.go:141] libmachine: Launching plugin server for driver kvm2
I1101 00:09:48.538023 30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
I1101 00:09:48.538553 30593 main.go:141] libmachine: () Calling .GetVersion
I1101 00:09:48.539008 30593 main.go:141] libmachine: Using API Version 1
I1101 00:09:48.539038 30593 main.go:141] libmachine: () Calling .SetConfigRaw
I1101 00:09:48.539380 30593 main.go:141] libmachine: () Calling .GetMachineName
I1101 00:09:48.539558 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:09:48.539763 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetState
I1101 00:09:48.541362 30593 fix.go:102] recreateIfNeeded on multinode-391061-m02: state=Stopped err=<nil>
I1101 00:09:48.541381 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
W1101 00:09:48.541559 30593 fix.go:128] unexpected machine state, will restart: <nil>
I1101 00:09:48.543776 30593 out.go:177] * Restarting existing kvm2 VM for "multinode-391061-m02" ...
I1101 00:09:48.545357 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .Start
I1101 00:09:48.545519 30593 main.go:141] libmachine: (multinode-391061-m02) Ensuring networks are active...
I1101 00:09:48.546142 30593 main.go:141] libmachine: (multinode-391061-m02) Ensuring network default is active
I1101 00:09:48.546521 30593 main.go:141] libmachine: (multinode-391061-m02) Ensuring network mk-multinode-391061 is active
I1101 00:09:48.546910 30593 main.go:141] libmachine: (multinode-391061-m02) Getting domain xml...
I1101 00:09:48.547503 30593 main.go:141] libmachine: (multinode-391061-m02) Creating domain...
I1101 00:09:49.771823 30593 main.go:141] libmachine: (multinode-391061-m02) Waiting to get IP...
I1101 00:09:49.772640 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:49.773071 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:49.773175 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:49.773074 30847 retry.go:31] will retry after 274.263244ms: waiting for machine to come up
I1101 00:09:50.048692 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:50.049124 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:50.049162 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:50.049076 30847 retry.go:31] will retry after 372.692246ms: waiting for machine to come up
I1101 00:09:50.423723 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:50.424163 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:50.424198 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:50.424109 30847 retry.go:31] will retry after 328.806363ms: waiting for machine to come up
I1101 00:09:50.754813 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:50.755280 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:50.755299 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:50.755254 30847 retry.go:31] will retry after 486.547371ms: waiting for machine to come up
I1101 00:09:51.243022 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:51.243428 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:51.243451 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:51.243379 30847 retry.go:31] will retry after 524.248371ms: waiting for machine to come up
I1101 00:09:51.769198 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:51.769648 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:51.769689 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:51.769606 30847 retry.go:31] will retry after 931.47967ms: waiting for machine to come up
I1101 00:09:52.703177 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:52.703627 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:52.703656 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:52.703550 30847 retry.go:31] will retry after 962.96473ms: waiting for machine to come up
I1101 00:09:53.668096 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:53.668562 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:53.668584 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:53.668516 30847 retry.go:31] will retry after 926.464487ms: waiting for machine to come up
I1101 00:09:54.596589 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:54.596929 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:54.596953 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:54.596883 30847 retry.go:31] will retry after 1.199020855s: waiting for machine to come up
I1101 00:09:55.797189 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:55.797717 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:55.797748 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:55.797665 30847 retry.go:31] will retry after 1.98043569s: waiting for machine to come up
I1101 00:09:57.780876 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:57.781471 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:57.781502 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:57.781409 30847 retry.go:31] will retry after 2.601288069s: waiting for machine to come up
I1101 00:10:00.385745 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:00.386332 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:10:00.386369 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:10:00.386242 30847 retry.go:31] will retry after 2.239008923s: waiting for machine to come up
I1101 00:10:02.627577 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:02.627955 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:10:02.627983 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:10:02.627920 30847 retry.go:31] will retry after 3.415765053s: waiting for machine to come up
I1101 00:10:06.046739 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.047249 30593 main.go:141] libmachine: (multinode-391061-m02) Found IP for machine: 192.168.39.249
I1101 00:10:06.047290 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has current primary IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.047305 30593 main.go:141] libmachine: (multinode-391061-m02) Reserving static IP address...
I1101 00:10:06.047763 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "multinode-391061-m02", mac: "52:54:00:f1:1a:84", ip: "192.168.39.249"} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.047790 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | skip adding static IP to network mk-multinode-391061 - found existing host DHCP lease matching {name: "multinode-391061-m02", mac: "52:54:00:f1:1a:84", ip: "192.168.39.249"}
I1101 00:10:06.047800 30593 main.go:141] libmachine: (multinode-391061-m02) Reserved static IP address: 192.168.39.249
I1101 00:10:06.047814 30593 main.go:141] libmachine: (multinode-391061-m02) Waiting for SSH to be available...
I1101 00:10:06.047824 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | Getting to WaitForSSH function...
I1101 00:10:06.049673 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.050046 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.050081 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.050222 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | Using SSH client type: external
I1101 00:10:06.050261 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa (-rw-------)
I1101 00:10:06.050300 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I1101 00:10:06.050322 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | About to run SSH command:
I1101 00:10:06.050339 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | exit 0
I1101 00:10:06.146337 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | SSH cmd err, output: <nil>:
I1101 00:10:06.146696 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetConfigRaw
I1101 00:10:06.147450 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
I1101 00:10:06.149870 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.150236 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.150267 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.150541 30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
I1101 00:10:06.150763 30593 machine.go:88] provisioning docker machine ...
I1101 00:10:06.150786 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:06.150984 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetMachineName
I1101 00:10:06.151140 30593 buildroot.go:166] provisioning hostname "multinode-391061-m02"
I1101 00:10:06.151161 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetMachineName
I1101 00:10:06.151315 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:06.153372 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.153742 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.153790 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.153926 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:06.154158 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.154347 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.154535 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:06.154739 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:10:06.155162 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1101 00:10:06.155179 30593 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-391061-m02 && echo "multinode-391061-m02" | sudo tee /etc/hostname
I1101 00:10:06.302682 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-391061-m02
I1101 00:10:06.302715 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:06.305443 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.305857 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.305883 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.306094 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:06.306306 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.306521 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.306659 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:06.306805 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:10:06.307269 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1101 00:10:06.307298 30593 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-391061-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-391061-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-391061-m02' | sudo tee -a /etc/hosts;
fi
fi
I1101 00:10:06.448087 30593 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1101 00:10:06.448122 30593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7251/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7251/.minikube}
I1101 00:10:06.448143 30593 buildroot.go:174] setting up certificates
I1101 00:10:06.448153 30593 provision.go:83] configureAuth start
I1101 00:10:06.448163 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetMachineName
I1101 00:10:06.448466 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
I1101 00:10:06.451196 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.451596 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.451627 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.451812 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:06.453965 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.454286 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.454315 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.454535 30593 provision.go:138] copyHostCerts
I1101 00:10:06.454570 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
I1101 00:10:06.454601 30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem, removing ...
I1101 00:10:06.454610 30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
I1101 00:10:06.454674 30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem (1082 bytes)
I1101 00:10:06.454748 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
I1101 00:10:06.454767 30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem, removing ...
I1101 00:10:06.454773 30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
I1101 00:10:06.454796 30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem (1123 bytes)
I1101 00:10:06.454836 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
I1101 00:10:06.454852 30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem, removing ...
I1101 00:10:06.454858 30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
I1101 00:10:06.454876 30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem (1675 bytes)
I1101 00:10:06.454920 30593 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem org=jenkins.multinode-391061-m02 san=[192.168.39.249 192.168.39.249 localhost 127.0.0.1 minikube multinode-391061-m02]
I1101 00:10:06.568585 30593 provision.go:172] copyRemoteCerts
I1101 00:10:06.568638 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 00:10:06.568659 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:06.571150 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.571450 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.571479 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.571687 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:06.571874 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.572047 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:06.572186 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
I1101 00:10:06.667838 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1101 00:10:06.667924 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1101 00:10:06.689930 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem -> /etc/docker/server.pem
I1101 00:10:06.689995 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I1101 00:10:06.712213 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1101 00:10:06.712292 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1101 00:10:06.733879 30593 provision.go:86] duration metric: configureAuth took 285.714663ms
I1101 00:10:06.733904 30593 buildroot.go:189] setting minikube options for container-runtime
I1101 00:10:06.734094 30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1101 00:10:06.734113 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:06.734377 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:06.736917 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.737314 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.737348 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.737503 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:06.737692 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.737870 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.738014 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:06.738189 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:10:06.738528 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1101 00:10:06.738541 30593 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1101 00:10:06.871826 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I1101 00:10:06.871854 30593 buildroot.go:70] root file system type: tmpfs
I1101 00:10:06.872006 30593 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1101 00:10:06.872036 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:06.874568 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.874916 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.874940 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.875118 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:06.875315 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.875468 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.875569 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:06.875698 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:10:06.876002 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1101 00:10:06.876075 30593 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.43"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1101 00:10:07.020165 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.43
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1101 00:10:07.020194 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:07.022769 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:07.023132 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:07.023159 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:07.023341 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:07.023522 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:07.023707 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:07.023843 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:07.023996 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:10:07.024324 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1101 00:10:07.024341 30593 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1101 00:10:07.865650 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I1101 00:10:07.865678 30593 machine.go:91] provisioned docker machine in 1.714900545s
I1101 00:10:07.865693 30593 start.go:300] post-start starting for "multinode-391061-m02" (driver="kvm2")
I1101 00:10:07.865707 30593 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 00:10:07.865730 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:07.866051 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 00:10:07.866082 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:07.868728 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:07.869111 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:07.869135 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:07.869295 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:07.869516 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:07.869672 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:07.869814 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
I1101 00:10:07.964822 30593 ssh_runner.go:195] Run: cat /etc/os-release
I1101 00:10:07.968645 30593 command_runner.go:130] > NAME=Buildroot
I1101 00:10:07.968665 30593 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
I1101 00:10:07.968672 30593 command_runner.go:130] > ID=buildroot
I1101 00:10:07.968681 30593 command_runner.go:130] > VERSION_ID=2021.02.12
I1101 00:10:07.968687 30593 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I1101 00:10:07.968778 30593 info.go:137] Remote host: Buildroot 2021.02.12
I1101 00:10:07.968802 30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/addons for local assets ...
I1101 00:10:07.968861 30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/files for local assets ...
I1101 00:10:07.968928 30593 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> 144632.pem in /etc/ssl/certs
I1101 00:10:07.968937 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> /etc/ssl/certs/144632.pem
I1101 00:10:07.969013 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1101 00:10:07.978134 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /etc/ssl/certs/144632.pem (1708 bytes)
I1101 00:10:07.999912 30593 start.go:303] post-start completed in 134.20357ms
I1101 00:10:07.999936 30593 fix.go:56] fixHost completed within 19.476570148s
I1101 00:10:07.999956 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:08.002715 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.003077 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:08.003109 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.003255 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:08.003478 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:08.003658 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:08.003796 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:08.003977 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:10:08.004287 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1101 00:10:08.004297 30593 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1101 00:10:08.139625 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698797408.091239350
I1101 00:10:08.139661 30593 fix.go:206] guest clock: 1698797408.091239350
I1101 00:10:08.139672 30593 fix.go:219] Guest: 2023-11-01 00:10:08.09123935 +0000 UTC Remote: 2023-11-01 00:10:07.999939094 +0000 UTC m=+78.350442936 (delta=91.300256ms)
I1101 00:10:08.139692 30593 fix.go:190] guest clock delta is within tolerance: 91.300256ms
I1101 00:10:08.139699 30593 start.go:83] releasing machines lock for "multinode-391061-m02", held for 19.616342127s
I1101 00:10:08.139723 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:08.140075 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
I1101 00:10:08.142846 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.143203 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:08.143246 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.145734 30593 out.go:177] * Found network options:
I1101 00:10:08.147426 30593 out.go:177] - NO_PROXY=192.168.39.43
W1101 00:10:08.148945 30593 proxy.go:119] fail to check proxy env: Error ip not in block
I1101 00:10:08.148990 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:08.149744 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:08.149992 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:08.150087 30593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1101 00:10:08.150122 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
W1101 00:10:08.150204 30593 proxy.go:119] fail to check proxy env: Error ip not in block
I1101 00:10:08.150272 30593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1101 00:10:08.150293 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:08.153130 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.153377 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.153609 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:08.153633 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.153818 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:08.153840 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.153853 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:08.154005 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:08.154068 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:08.154141 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:08.154205 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:08.154260 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:08.154322 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
I1101 00:10:08.154355 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
I1101 00:10:08.266696 30593 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I1101 00:10:08.266764 30593 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W1101 00:10:08.266798 30593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1101 00:10:08.266854 30593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1101 00:10:08.282630 30593 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I1101 00:10:08.282695 30593 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1101 00:10:08.282708 30593 start.go:472] detecting cgroup driver to use...
I1101 00:10:08.282848 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1101 00:10:08.299593 30593 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I1101 00:10:08.299879 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I1101 00:10:08.309962 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1101 00:10:08.319802 30593 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1101 00:10:08.319855 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1101 00:10:08.329984 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1101 00:10:08.340324 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1101 00:10:08.350388 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1101 00:10:08.360362 30593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1101 00:10:08.370630 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1101 00:10:08.380841 30593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1101 00:10:08.389848 30593 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I1101 00:10:08.389933 30593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1101 00:10:08.398827 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:10:08.509909 30593 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1101 00:10:08.527202 30593 start.go:472] detecting cgroup driver to use...
I1101 00:10:08.527267 30593 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1101 00:10:08.539911 30593 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I1101 00:10:08.540831 30593 command_runner.go:130] > [Unit]
I1101 00:10:08.540847 30593 command_runner.go:130] > Description=Docker Application Container Engine
I1101 00:10:08.540853 30593 command_runner.go:130] > Documentation=https://docs.docker.com
I1101 00:10:08.540859 30593 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I1101 00:10:08.540864 30593 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I1101 00:10:08.540873 30593 command_runner.go:130] > StartLimitBurst=3
I1101 00:10:08.540880 30593 command_runner.go:130] > StartLimitIntervalSec=60
I1101 00:10:08.540884 30593 command_runner.go:130] > [Service]
I1101 00:10:08.540890 30593 command_runner.go:130] > Type=notify
I1101 00:10:08.540899 30593 command_runner.go:130] > Restart=on-failure
I1101 00:10:08.540906 30593 command_runner.go:130] > Environment=NO_PROXY=192.168.39.43
I1101 00:10:08.540915 30593 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I1101 00:10:08.540932 30593 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I1101 00:10:08.540943 30593 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I1101 00:10:08.540952 30593 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I1101 00:10:08.540961 30593 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I1101 00:10:08.540970 30593 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I1101 00:10:08.540980 30593 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I1101 00:10:08.540993 30593 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I1101 00:10:08.541002 30593 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I1101 00:10:08.541009 30593 command_runner.go:130] > ExecStart=
I1101 00:10:08.541024 30593 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I1101 00:10:08.541035 30593 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I1101 00:10:08.541042 30593 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I1101 00:10:08.541051 30593 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I1101 00:10:08.541057 30593 command_runner.go:130] > LimitNOFILE=infinity
I1101 00:10:08.541062 30593 command_runner.go:130] > LimitNPROC=infinity
I1101 00:10:08.541066 30593 command_runner.go:130] > LimitCORE=infinity
I1101 00:10:08.541073 30593 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I1101 00:10:08.541080 30593 command_runner.go:130] > # Only systemd 226 and above support this version.
I1101 00:10:08.541087 30593 command_runner.go:130] > TasksMax=infinity
I1101 00:10:08.541091 30593 command_runner.go:130] > TimeoutStartSec=0
I1101 00:10:08.541100 30593 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I1101 00:10:08.541106 30593 command_runner.go:130] > Delegate=yes
I1101 00:10:08.541112 30593 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I1101 00:10:08.541122 30593 command_runner.go:130] > KillMode=process
I1101 00:10:08.541128 30593 command_runner.go:130] > [Install]
I1101 00:10:08.541133 30593 command_runner.go:130] > WantedBy=multi-user.target
I1101 00:10:08.541558 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 00:10:08.556173 30593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1101 00:10:08.575016 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 00:10:08.587990 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 00:10:08.601691 30593 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1101 00:10:08.631342 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 00:10:08.644194 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1101 00:10:08.661548 30593 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I1101 00:10:08.662099 30593 ssh_runner.go:195] Run: which cri-dockerd
I1101 00:10:08.665592 30593 command_runner.go:130] > /usr/bin/cri-dockerd
I1101 00:10:08.665782 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1101 00:10:08.674228 30593 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1101 00:10:08.690202 30593 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1101 00:10:08.793665 30593 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1101 00:10:08.913029 30593 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
I1101 00:10:08.913074 30593 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1101 00:10:08.928591 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:10:09.029624 30593 ssh_runner.go:195] Run: sudo systemctl restart docker
I1101 00:10:10.439233 30593 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.409560046s)
I1101 00:10:10.439309 30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1101 00:10:10.540266 30593 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1101 00:10:10.657292 30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1101 00:10:10.768655 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:10:10.871570 30593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1101 00:10:10.887421 30593 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
I1101 00:10:10.889772 30593 out.go:177]
W1101 00:10:10.891480 30593 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W1101 00:10:10.891500 30593 out.go:239] *
W1101 00:10:10.892409 30593 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1101 00:10:10.894220 30593 out.go:177]
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-391061 --wait=true -v=8 --alsologtostderr --driver=kvm2 " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-391061 -n multinode-391061
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-391061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-391061 logs -n 25: (1.289304328s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
| cp | multinode-391061 cp multinode-391061-m02:/home/docker/cp-test.txt | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | multinode-391061:/home/docker/cp-test_multinode-391061-m02_multinode-391061.txt | | | | | |
| ssh | multinode-391061 ssh -n | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | multinode-391061-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-391061 ssh -n multinode-391061 sudo cat | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | /home/docker/cp-test_multinode-391061-m02_multinode-391061.txt | | | | | |
| cp | multinode-391061 cp multinode-391061-m02:/home/docker/cp-test.txt | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | multinode-391061-m03:/home/docker/cp-test_multinode-391061-m02_multinode-391061-m03.txt | | | | | |
| ssh | multinode-391061 ssh -n | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | multinode-391061-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-391061 ssh -n multinode-391061-m03 sudo cat | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | /home/docker/cp-test_multinode-391061-m02_multinode-391061-m03.txt | | | | | |
| cp | multinode-391061 cp testdata/cp-test.txt | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | multinode-391061-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-391061 ssh -n | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | multinode-391061-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-391061 cp multinode-391061-m03:/home/docker/cp-test.txt | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | /tmp/TestMultiNodeserialCopyFile415772365/001/cp-test_multinode-391061-m03.txt | | | | | |
| ssh | multinode-391061 ssh -n | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | multinode-391061-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-391061 cp multinode-391061-m03:/home/docker/cp-test.txt | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | multinode-391061:/home/docker/cp-test_multinode-391061-m03_multinode-391061.txt | | | | | |
| ssh | multinode-391061 ssh -n | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | multinode-391061-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-391061 ssh -n multinode-391061 sudo cat | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | /home/docker/cp-test_multinode-391061-m03_multinode-391061.txt | | | | | |
| cp | multinode-391061 cp multinode-391061-m03:/home/docker/cp-test.txt | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | multinode-391061-m02:/home/docker/cp-test_multinode-391061-m03_multinode-391061-m02.txt | | | | | |
| ssh | multinode-391061 ssh -n | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | multinode-391061-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-391061 ssh -n multinode-391061-m02 sudo cat | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| | /home/docker/cp-test_multinode-391061-m03_multinode-391061-m02.txt | | | | | |
| node | multinode-391061 node stop m03 | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
| node | multinode-391061 node start | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:05 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-391061 | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:05 UTC | |
| stop | -p multinode-391061 | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:05 UTC | 01 Nov 23 00:05 UTC |
| start | -p multinode-391061 | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:05 UTC | 01 Nov 23 00:08 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-391061 | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:08 UTC | |
| node | multinode-391061 node delete | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:08 UTC | 01 Nov 23 00:08 UTC |
| | m03 | | | | | |
| stop | multinode-391061 stop | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:08 UTC | 01 Nov 23 00:08 UTC |
| start | -p multinode-391061 | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:08 UTC | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/11/01 00:08:49
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.21.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1101 00:08:49.696747 30593 out.go:296] Setting OutFile to fd 1 ...
I1101 00:08:49.696976 30593 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:08:49.696984 30593 out.go:309] Setting ErrFile to fd 2...
I1101 00:08:49.696989 30593 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:08:49.697199 30593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
I1101 00:08:49.697724 30593 out.go:303] Setting JSON to false
I1101 00:08:49.698581 30593 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3079,"bootTime":1698794251,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1101 00:08:49.698643 30593 start.go:138] virtualization: kvm guest
I1101 00:08:49.701257 30593 out.go:177] * [multinode-391061] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
I1101 00:08:49.702839 30593 out.go:177] - MINIKUBE_LOCATION=17486
I1101 00:08:49.702844 30593 notify.go:220] Checking for updates...
I1101 00:08:49.704612 30593 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1101 00:08:49.706320 30593 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
I1101 00:08:49.707852 30593 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
I1101 00:08:49.709325 30593 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1101 00:08:49.710727 30593 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1101 00:08:49.712746 30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1101 00:08:49.713116 30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1101 00:08:49.713162 30593 main.go:141] libmachine: Launching plugin server for driver kvm2
I1101 00:08:49.727252 30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
I1101 00:08:49.727584 30593 main.go:141] libmachine: () Calling .GetVersion
I1101 00:08:49.728056 30593 main.go:141] libmachine: Using API Version 1
I1101 00:08:49.728075 30593 main.go:141] libmachine: () Calling .SetConfigRaw
I1101 00:08:49.728412 30593 main.go:141] libmachine: () Calling .GetMachineName
I1101 00:08:49.728601 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:08:49.728809 30593 driver.go:378] Setting default libvirt URI to qemu:///system
I1101 00:08:49.729119 30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1101 00:08:49.729158 30593 main.go:141] libmachine: Launching plugin server for driver kvm2
I1101 00:08:49.742929 30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
I1101 00:08:49.743302 30593 main.go:141] libmachine: () Calling .GetVersion
I1101 00:08:49.743756 30593 main.go:141] libmachine: Using API Version 1
I1101 00:08:49.743779 30593 main.go:141] libmachine: () Calling .SetConfigRaw
I1101 00:08:49.744063 30593 main.go:141] libmachine: () Calling .GetMachineName
I1101 00:08:49.744234 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:08:49.779391 30593 out.go:177] * Using the kvm2 driver based on existing profile
I1101 00:08:49.780999 30593 start.go:298] selected driver: kvm2
I1101 00:08:49.781015 30593 start.go:902] validating driver "kvm2" against &{Name:multinode-391061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false k
ubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMn
etPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1101 00:08:49.781172 30593 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1101 00:08:49.781470 30593 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 00:08:49.781541 30593 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7251/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1101 00:08:49.796518 30593 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
I1101 00:08:49.797197 30593 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 00:08:49.797254 30593 cni.go:84] Creating CNI manager for ""
I1101 00:08:49.797263 30593 cni.go:136] 2 nodes found, recommending kindnet
I1101 00:08:49.797274 30593 start_flags.go:323] config:
{Name:multinode-391061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:fals
e nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1101 00:08:49.797449 30593 iso.go:125] acquiring lock: {Name:mk56e0e42e3cb427bae1fd4521b75db693021ac1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 00:08:49.799445 30593 out.go:177] * Starting control plane node multinode-391061 in cluster multinode-391061
I1101 00:08:49.802107 30593 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I1101 00:08:49.802154 30593 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
I1101 00:08:49.802163 30593 cache.go:56] Caching tarball of preloaded images
I1101 00:08:49.802239 30593 preload.go:174] Found /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1101 00:08:49.802251 30593 cache.go:59] Finished verifying existence of preloaded tar for v1.28.3 on docker
I1101 00:08:49.802383 30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
I1101 00:08:49.802605 30593 start.go:365] acquiring machines lock for multinode-391061: {Name:mkd250049361a5d831a3d31c273569334737e54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1101 00:08:49.802660 30593 start.go:369] acquired machines lock for "multinode-391061" in 32.142µs
I1101 00:08:49.802683 30593 start.go:96] Skipping create...Using existing machine configuration
I1101 00:08:49.802692 30593 fix.go:54] fixHost starting:
I1101 00:08:49.802950 30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1101 00:08:49.802988 30593 main.go:141] libmachine: Launching plugin server for driver kvm2
I1101 00:08:49.817041 30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
I1101 00:08:49.817426 30593 main.go:141] libmachine: () Calling .GetVersion
I1101 00:08:49.817852 30593 main.go:141] libmachine: Using API Version 1
I1101 00:08:49.817876 30593 main.go:141] libmachine: () Calling .SetConfigRaw
I1101 00:08:49.818147 30593 main.go:141] libmachine: () Calling .GetMachineName
I1101 00:08:49.818268 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:08:49.818364 30593 main.go:141] libmachine: (multinode-391061) Calling .GetState
I1101 00:08:49.819780 30593 fix.go:102] recreateIfNeeded on multinode-391061: state=Stopped err=<nil>
I1101 00:08:49.819798 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
W1101 00:08:49.819945 30593 fix.go:128] unexpected machine state, will restart: <nil>
I1101 00:08:49.822198 30593 out.go:177] * Restarting existing kvm2 VM for "multinode-391061" ...
I1101 00:08:49.823675 30593 main.go:141] libmachine: (multinode-391061) Calling .Start
I1101 00:08:49.823836 30593 main.go:141] libmachine: (multinode-391061) Ensuring networks are active...
I1101 00:08:49.824527 30593 main.go:141] libmachine: (multinode-391061) Ensuring network default is active
I1101 00:08:49.824903 30593 main.go:141] libmachine: (multinode-391061) Ensuring network mk-multinode-391061 is active
I1101 00:08:49.825231 30593 main.go:141] libmachine: (multinode-391061) Getting domain xml...
I1101 00:08:49.825825 30593 main.go:141] libmachine: (multinode-391061) Creating domain...
I1101 00:08:51.072133 30593 main.go:141] libmachine: (multinode-391061) Waiting to get IP...
I1101 00:08:51.072978 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:51.073561 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:51.073673 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.073534 30629 retry.go:31] will retry after 229.675258ms: waiting for machine to come up
I1101 00:08:51.305068 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:51.305486 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:51.305513 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.305442 30629 retry.go:31] will retry after 372.862383ms: waiting for machine to come up
I1101 00:08:51.680135 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:51.680628 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:51.680663 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.680610 30629 retry.go:31] will retry after 314.755115ms: waiting for machine to come up
I1101 00:08:51.997095 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:51.997485 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:51.997516 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.997452 30629 retry.go:31] will retry after 376.70772ms: waiting for machine to come up
I1101 00:08:52.376191 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:52.376728 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:52.376768 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:52.376689 30629 retry.go:31] will retry after 583.291159ms: waiting for machine to come up
I1101 00:08:52.961471 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:52.961889 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:52.961920 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:52.961826 30629 retry.go:31] will retry after 803.566491ms: waiting for machine to come up
I1101 00:08:53.766791 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:53.767211 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:53.767251 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:53.767153 30629 retry.go:31] will retry after 1.032833525s: waiting for machine to come up
I1101 00:08:54.801328 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:54.801700 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:54.801734 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:54.801656 30629 retry.go:31] will retry after 1.044435025s: waiting for machine to come up
I1101 00:08:55.847409 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:55.847850 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:55.847874 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:55.847797 30629 retry.go:31] will retry after 1.41464542s: waiting for machine to come up
I1101 00:08:57.264298 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:57.264621 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:57.264658 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:57.264585 30629 retry.go:31] will retry after 1.783339985s: waiting for machine to come up
I1101 00:08:59.050737 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:08:59.051258 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:08:59.051280 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:59.051209 30629 retry.go:31] will retry after 2.24727828s: waiting for machine to come up
I1101 00:09:01.300675 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:01.301123 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:09:01.301147 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:09:01.301080 30629 retry.go:31] will retry after 2.659318668s: waiting for machine to come up
I1101 00:09:03.964050 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:03.964412 30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
I1101 00:09:03.964433 30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:09:03.964369 30629 retry.go:31] will retry after 4.002549509s: waiting for machine to come up
I1101 00:09:07.970570 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:07.970947 30593 main.go:141] libmachine: (multinode-391061) Found IP for machine: 192.168.39.43
I1101 00:09:07.970973 30593 main.go:141] libmachine: (multinode-391061) Reserving static IP address...
I1101 00:09:07.970988 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has current primary IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:07.971417 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "multinode-391061", mac: "52:54:00:b9:c2:69", ip: "192.168.39.43"} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:07.971446 30593 main.go:141] libmachine: (multinode-391061) DBG | skip adding static IP to network mk-multinode-391061 - found existing host DHCP lease matching {name: "multinode-391061", mac: "52:54:00:b9:c2:69", ip: "192.168.39.43"}
I1101 00:09:07.971454 30593 main.go:141] libmachine: (multinode-391061) Reserved static IP address: 192.168.39.43
I1101 00:09:07.971463 30593 main.go:141] libmachine: (multinode-391061) Waiting for SSH to be available...
I1101 00:09:07.971472 30593 main.go:141] libmachine: (multinode-391061) DBG | Getting to WaitForSSH function...
I1101 00:09:07.973244 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:07.973598 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:07.973629 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:07.973785 30593 main.go:141] libmachine: (multinode-391061) DBG | Using SSH client type: external
I1101 00:09:07.973815 30593 main.go:141] libmachine: (multinode-391061) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa (-rw-------)
I1101 00:09:07.973859 30593 main.go:141] libmachine: (multinode-391061) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa -p 22] /usr/bin/ssh <nil>}
I1101 00:09:07.973884 30593 main.go:141] libmachine: (multinode-391061) DBG | About to run SSH command:
I1101 00:09:07.973895 30593 main.go:141] libmachine: (multinode-391061) DBG | exit 0
I1101 00:09:08.070105 30593 main.go:141] libmachine: (multinode-391061) DBG | SSH cmd err, output: <nil>:
I1101 00:09:08.070483 30593 main.go:141] libmachine: (multinode-391061) Calling .GetConfigRaw
I1101 00:09:08.071216 30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
I1101 00:09:08.073614 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.074025 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.074060 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.074285 30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
I1101 00:09:08.074479 30593 machine.go:88] provisioning docker machine ...
I1101 00:09:08.074512 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:08.074714 30593 main.go:141] libmachine: (multinode-391061) Calling .GetMachineName
I1101 00:09:08.074856 30593 buildroot.go:166] provisioning hostname "multinode-391061"
I1101 00:09:08.074870 30593 main.go:141] libmachine: (multinode-391061) Calling .GetMachineName
I1101 00:09:08.074990 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.077098 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.077410 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.077452 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.077575 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:08.077739 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.077899 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.078007 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:08.078153 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:09:08.078494 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.43 22 <nil> <nil>}
I1101 00:09:08.078529 30593 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-391061 && echo "multinode-391061" | sudo tee /etc/hostname
I1101 00:09:08.217944 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-391061
I1101 00:09:08.217967 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.220671 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.220963 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.221024 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.221089 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:08.221295 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.221466 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.221616 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:08.221803 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:09:08.222253 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.43 22 <nil> <nil>}
I1101 00:09:08.222280 30593 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-391061' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-391061/g' /etc/hosts;
else
echo '127.0.1.1 multinode-391061' | sudo tee -a /etc/hosts;
fi
fi
I1101 00:09:08.359049 30593 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1101 00:09:08.359078 30593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7251/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7251/.minikube}
I1101 00:09:08.359096 30593 buildroot.go:174] setting up certificates
I1101 00:09:08.359104 30593 provision.go:83] configureAuth start
I1101 00:09:08.359112 30593 main.go:141] libmachine: (multinode-391061) Calling .GetMachineName
I1101 00:09:08.359381 30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
I1101 00:09:08.361931 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.362234 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.362269 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.362374 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.364658 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.364936 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.364968 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.365105 30593 provision.go:138] copyHostCerts
I1101 00:09:08.365133 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
I1101 00:09:08.365172 30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem, removing ...
I1101 00:09:08.365183 30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
I1101 00:09:08.365248 30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem (1082 bytes)
I1101 00:09:08.365344 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
I1101 00:09:08.365365 30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem, removing ...
I1101 00:09:08.365372 30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
I1101 00:09:08.365399 30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem (1123 bytes)
I1101 00:09:08.365452 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
I1101 00:09:08.365467 30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem, removing ...
I1101 00:09:08.365473 30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
I1101 00:09:08.365494 30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem (1675 bytes)
I1101 00:09:08.365549 30593 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem org=jenkins.multinode-391061 san=[192.168.39.43 192.168.39.43 localhost 127.0.0.1 minikube multinode-391061]
I1101 00:09:08.497882 30593 provision.go:172] copyRemoteCerts
I1101 00:09:08.497940 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 00:09:08.497965 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.500598 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.500931 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.500961 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.501176 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:08.501356 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.501513 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:08.501639 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
I1101 00:09:08.594935 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1101 00:09:08.594993 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1101 00:09:08.617737 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1101 00:09:08.617835 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1101 00:09:08.639923 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem -> /etc/docker/server.pem
I1101 00:09:08.640003 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I1101 00:09:08.662129 30593 provision.go:86] duration metric: configureAuth took 303.015088ms
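The configureAuth phase above generates a CA-signed server certificate with the SANs shown (192.168.39.43, localhost, 127.0.0.1, minikube, multinode-391061) and copies ca.pem, server.pem, and server-key.pem into /etc/docker. A minimal openssl sketch of an equivalent cert layout, assuming generic openssl 1.1.1+ flags rather than minikube's actual Go code path (the org and SAN values are copied from the log; everything else is illustrative):

```shell
# Sketch only: build a CA plus a server cert with the SANs seen in the log.
set -e
dir=$(mktemp -d)
openssl genrsa -out "$dir/ca-key.pem" 2048
openssl req -new -x509 -key "$dir/ca-key.pem" \
  -subj "/O=jenkins.multinode-391061" -days 1 -out "$dir/ca.pem"
openssl genrsa -out "$dir/server-key.pem" 2048
openssl req -new -key "$dir/server-key.pem" \
  -subj "/CN=multinode-391061" -out "$dir/server.csr"
# SANs matching the "san=[...]" list logged by provision.go.
printf 'subjectAltName=IP:192.168.39.43,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-391061\n' \
  > "$dir/san.cnf"
openssl x509 -req -in "$dir/server.csr" -CA "$dir/ca.pem" -CAkey "$dir/ca-key.pem" \
  -CAcreateserial -days 1 -extfile "$dir/san.cnf" -out "$dir/server.pem"
# The three files copied to /etc/docker in the log correspond to
# ca.pem, server.pem, and server-key.pem produced here.
openssl verify -CAfile "$dir/ca.pem" "$dir/server.pem"
```

These are the same three filenames dockerd's --tlscacert/--tlscert/--tlskey flags reference later in this log.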
I1101 00:09:08.662155 30593 buildroot.go:189] setting minikube options for container-runtime
I1101 00:09:08.662403 30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1101 00:09:08.662426 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:08.662704 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.665367 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.665756 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.665781 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.665918 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:08.666128 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.666300 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.666449 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:08.666613 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:09:08.666928 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.43 22 <nil> <nil>}
I1101 00:09:08.666940 30593 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1101 00:09:08.795906 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I1101 00:09:08.795936 30593 buildroot.go:70] root file system type: tmpfs
I1101 00:09:08.796096 30593 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1101 00:09:08.796134 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.798879 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.799232 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.799265 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.799423 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:08.799598 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.799753 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.799868 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:08.800041 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:09:08.800361 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.43 22 <nil> <nil>}
I1101 00:09:08.800421 30593 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1101 00:09:08.942805 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1101 00:09:08.942844 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:08.945908 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.946293 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:08.946326 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:08.946513 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:08.946689 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.946882 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:08.947001 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:08.947184 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:09:08.947647 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.43 22 <nil> <nil>}
I1101 00:09:08.947681 30593 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1101 00:09:09.848694 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I1101 00:09:09.848722 30593 machine.go:91] provisioned docker machine in 1.774228913s
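The unit update just above uses a write-then-diff-then-swap pattern: the new unit is written to docker.service.new, and only if it differs from the installed file (or, as here, the installed file does not exist) is it moved into place and the service reloaded. A sketch of the same idempotency pattern against throwaway files, with the systemctl side effects omitted:

```shell
# Demonstrate the diff-or-swap pattern from the log on temp files.
set -e
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd --old-flag\n' > "$dir/docker.service"
printf 'ExecStart=/usr/bin/dockerd --new-flag\n' > "$dir/docker.service.new"
# Same invariant as the log: replace only when the files differ.
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  # In the real flow this is where daemon-reload / enable / restart run.
}
grep -q -- '--new-flag' "$dir/docker.service"
```

Running the same sequence twice is a no-op the second time, which is why minikube can apply it on every start.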
I1101 00:09:09.848735 30593 start.go:300] post-start starting for "multinode-391061" (driver="kvm2")
I1101 00:09:09.848748 30593 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 00:09:09.848772 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:09.849087 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 00:09:09.849113 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:09.851810 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:09.852197 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:09.852243 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:09.852386 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:09.852556 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:09.852728 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:09.852822 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
I1101 00:09:09.947639 30593 ssh_runner.go:195] Run: cat /etc/os-release
I1101 00:09:09.951509 30593 command_runner.go:130] > NAME=Buildroot
I1101 00:09:09.951530 30593 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
I1101 00:09:09.951535 30593 command_runner.go:130] > ID=buildroot
I1101 00:09:09.951542 30593 command_runner.go:130] > VERSION_ID=2021.02.12
I1101 00:09:09.951549 30593 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I1101 00:09:09.951586 30593 info.go:137] Remote host: Buildroot 2021.02.12
I1101 00:09:09.951598 30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/addons for local assets ...
I1101 00:09:09.951663 30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/files for local assets ...
I1101 00:09:09.951768 30593 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> 144632.pem in /etc/ssl/certs
I1101 00:09:09.951785 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> /etc/ssl/certs/144632.pem
I1101 00:09:09.951898 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1101 00:09:09.959594 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /etc/ssl/certs/144632.pem (1708 bytes)
I1101 00:09:09.981962 30593 start.go:303] post-start completed in 133.213964ms
I1101 00:09:09.982003 30593 fix.go:56] fixHost completed within 20.179294964s
I1101 00:09:09.982027 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:09.984776 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:09.985223 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:09.985252 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:09.985386 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:09.985595 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:09.985729 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:09.985860 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:09.985979 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:09:09.986435 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.43 22 <nil> <nil>}
I1101 00:09:09.986451 30593 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I1101 00:09:10.119733 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698797350.071514552
I1101 00:09:10.119761 30593 fix.go:206] guest clock: 1698797350.071514552
I1101 00:09:10.119769 30593 fix.go:219] Guest: 2023-11-01 00:09:10.071514552 +0000 UTC Remote: 2023-11-01 00:09:09.982007618 +0000 UTC m=+20.332511469 (delta=89.506934ms)
I1101 00:09:10.119793 30593 fix.go:190] guest clock delta is within tolerance: 89.506934ms
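The guest-clock check reads `date +%s.%N` over SSH and compares it to the host's wall clock, accepting the drift if the absolute delta is within tolerance (89.5ms here). A sketch of that delta computation with synthetic local timestamps; the 1-second tolerance below is illustrative, not necessarily minikube's actual threshold:

```shell
# Compute an absolute clock delta the way the fix.go check does conceptually.
guest=$(date +%s.%N)   # in the real flow: run on the VM over SSH
host=$(date +%s.%N)    # in the real flow: the host's local clock
delta=$(awk -v g="$guest" -v h="$host" \
  'BEGIN { d = h - g; if (d < 0) d = -d; print d }')
# Accept the drift if it is under the (illustrative) tolerance.
awk -v d="$delta" 'BEGIN { exit !(d < 1.0) }' && echo "within tolerance: ${delta}s"
```

If the delta exceeded tolerance, minikube would resynchronize the guest clock before proceeding.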
I1101 00:09:10.119800 30593 start.go:83] releasing machines lock for "multinode-391061", held for 20.317128044s
I1101 00:09:10.119826 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:10.120083 30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
I1101 00:09:10.122834 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:10.123267 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:10.123301 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:10.123482 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:10.124067 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:10.124267 30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
I1101 00:09:10.124386 30593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1101 00:09:10.124433 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:10.124459 30593 ssh_runner.go:195] Run: cat /version.json
I1101 00:09:10.124497 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
I1101 00:09:10.127197 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:10.127360 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:10.127632 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:10.127661 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:10.127789 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:10.127807 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:10.127837 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:10.127985 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
I1101 00:09:10.127991 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:10.128201 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:10.128203 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
I1101 00:09:10.128392 30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
I1101 00:09:10.128400 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
I1101 00:09:10.128527 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
I1101 00:09:10.219062 30593 command_runner.go:130] > {"iso_version": "v1.32.0-1698773592-17486", "kicbase_version": "v0.0.41-1698660445-17527", "minikube_version": "v1.32.0-beta.0", "commit": "01e1cff766666ed9b9dd97c2a32d71cdb94ff3cf"}
I1101 00:09:10.244630 30593 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I1101 00:09:10.245754 30593 ssh_runner.go:195] Run: systemctl --version
I1101 00:09:10.251311 30593 command_runner.go:130] > systemd 247 (247)
I1101 00:09:10.251350 30593 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I1101 00:09:10.251621 30593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1101 00:09:10.256782 30593 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W1101 00:09:10.256835 30593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1101 00:09:10.256887 30593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1101 00:09:10.271406 30593 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I1101 00:09:10.271460 30593 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
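The `find ... -exec mv {} {}.mk_disabled` step above disables conflicting bridge/podman CNI configs by renaming them, while skipping files already carrying the .mk_disabled suffix. The same pattern against a scratch directory instead of /etc/cni/net.d:

```shell
# Reproduce the CNI-disable rename from the log in a temp dir.
set -e
dir=$(mktemp -d)
touch "$dir/87-podman-bridge.conflist" "$dir/10-other.conflist"
find "$dir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
# Only the bridge/podman config is renamed; unrelated configs are untouched.
ls "$dir"
```

The `-not -name '*.mk_disabled'` guard is what makes the rename safe to re-run on every restart.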
I1101 00:09:10.271470 30593 start.go:472] detecting cgroup driver to use...
I1101 00:09:10.271565 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1101 00:09:10.288462 30593 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I1101 00:09:10.288546 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I1101 00:09:10.298090 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1101 00:09:10.307653 30593 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1101 00:09:10.307716 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1101 00:09:10.317073 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1101 00:09:10.326800 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1101 00:09:10.336055 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1101 00:09:10.345573 30593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1101 00:09:10.355553 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1101 00:09:10.365472 30593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1101 00:09:10.373896 30593 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I1101 00:09:10.374055 30593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1101 00:09:10.382414 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:09:10.484557 30593 ssh_runner.go:195] Run: sudo systemctl restart containerd
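The containerd reconfiguration above is a series of indentation-preserving `sed -i -r` edits to /etc/containerd/config.toml (forcing `SystemdCgroup = false` for the cgroupfs driver, pinning `conf_dir`, etc.). A sketch of two of those edits against a throwaway copy, assuming GNU sed as on the Buildroot guest:

```shell
# Apply the log's sed edits to a scratch config.toml instead of the real one.
set -e
f=$(mktemp)
printf '    SystemdCgroup = true\n    conf_dir = "/tmp/old"\n' > "$f"
# \1 keeps the captured leading whitespace, so TOML nesting is preserved.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$f"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$f"
cat "$f"
```

Capturing and re-emitting the leading spaces is the detail that lets these edits work at any nesting depth in the TOML file.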
I1101 00:09:10.503546 30593 start.go:472] detecting cgroup driver to use...
I1101 00:09:10.503677 30593 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1101 00:09:10.516143 30593 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I1101 00:09:10.517085 30593 command_runner.go:130] > [Unit]
I1101 00:09:10.517117 30593 command_runner.go:130] > Description=Docker Application Container Engine
I1101 00:09:10.517127 30593 command_runner.go:130] > Documentation=https://docs.docker.com
I1101 00:09:10.517135 30593 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I1101 00:09:10.517143 30593 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I1101 00:09:10.517151 30593 command_runner.go:130] > StartLimitBurst=3
I1101 00:09:10.517159 30593 command_runner.go:130] > StartLimitIntervalSec=60
I1101 00:09:10.517169 30593 command_runner.go:130] > [Service]
I1101 00:09:10.517175 30593 command_runner.go:130] > Type=notify
I1101 00:09:10.517185 30593 command_runner.go:130] > Restart=on-failure
I1101 00:09:10.517197 30593 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I1101 00:09:10.517218 30593 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I1101 00:09:10.517247 30593 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I1101 00:09:10.517256 30593 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I1101 00:09:10.517266 30593 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I1101 00:09:10.517276 30593 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I1101 00:09:10.517285 30593 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I1101 00:09:10.517306 30593 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I1101 00:09:10.517318 30593 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I1101 00:09:10.517328 30593 command_runner.go:130] > ExecStart=
I1101 00:09:10.517356 30593 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I1101 00:09:10.517369 30593 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I1101 00:09:10.517383 30593 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I1101 00:09:10.517397 30593 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I1101 00:09:10.517408 30593 command_runner.go:130] > LimitNOFILE=infinity
I1101 00:09:10.517415 30593 command_runner.go:130] > LimitNPROC=infinity
I1101 00:09:10.517425 30593 command_runner.go:130] > LimitCORE=infinity
I1101 00:09:10.517433 30593 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I1101 00:09:10.517441 30593 command_runner.go:130] > # Only systemd 226 and above support this version.
I1101 00:09:10.517447 30593 command_runner.go:130] > TasksMax=infinity
I1101 00:09:10.517454 30593 command_runner.go:130] > TimeoutStartSec=0
I1101 00:09:10.517463 30593 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I1101 00:09:10.517469 30593 command_runner.go:130] > Delegate=yes
I1101 00:09:10.517477 30593 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I1101 00:09:10.517488 30593 command_runner.go:130] > KillMode=process
I1101 00:09:10.517502 30593 command_runner.go:130] > [Install]
I1101 00:09:10.517521 30593 command_runner.go:130] > WantedBy=multi-user.target
I1101 00:09:10.517760 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 00:09:10.537353 30593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1101 00:09:10.559962 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 00:09:10.572863 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 00:09:10.585294 30593 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1101 00:09:10.613156 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 00:09:10.626018 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1101 00:09:10.642949 30593 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I1101 00:09:10.643493 30593 ssh_runner.go:195] Run: which cri-dockerd
I1101 00:09:10.647034 30593 command_runner.go:130] > /usr/bin/cri-dockerd
I1101 00:09:10.647148 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1101 00:09:10.656096 30593 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1101 00:09:10.672510 30593 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1101 00:09:10.775493 30593 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1101 00:09:10.890922 30593 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
I1101 00:09:10.891096 30593 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
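The 130-byte /etc/docker/daemon.json payload itself is not shown in the log; a typical daemon.json that selects the cgroupfs driver, as the "configuring docker to use cgroupfs" line describes, looks like the following (illustrative content only, not minikube's actual file):

```shell
# Write an illustrative daemon.json selecting the cgroupfs cgroup driver.
set -e
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
cat "$f"
```

Whatever the exact payload, the driver chosen here must match the kubelet's cgroup driver, which is why minikube writes it before restarting docker.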
I1101 00:09:10.911224 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:09:11.028462 30593 ssh_runner.go:195] Run: sudo systemctl restart docker
I1101 00:09:12.495501 30593 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.467002879s)
I1101 00:09:12.495587 30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1101 00:09:12.596857 30593 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1101 00:09:12.696859 30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1101 00:09:12.818695 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:09:12.925882 30593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1101 00:09:12.942696 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:09:13.046788 30593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I1101 00:09:13.125894 30593 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1101 00:09:13.125989 30593 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1101 00:09:13.131383 30593 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I1101 00:09:13.131401 30593 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I1101 00:09:13.131407 30593 command_runner.go:130] > Device: 16h/22d Inode: 823 Links: 1
I1101 00:09:13.131414 30593 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I1101 00:09:13.131420 30593 command_runner.go:130] > Access: 2023-11-01 00:09:13.012751521 +0000
I1101 00:09:13.131425 30593 command_runner.go:130] > Modify: 2023-11-01 00:09:13.012751521 +0000
I1101 00:09:13.131432 30593 command_runner.go:130] > Change: 2023-11-01 00:09:13.015751521 +0000
I1101 00:09:13.131448 30593 command_runner.go:130] > Birth: -
I1101 00:09:13.131608 30593 start.go:540] Will wait 60s for crictl version
I1101 00:09:13.131663 30593 ssh_runner.go:195] Run: which crictl
I1101 00:09:13.135151 30593 command_runner.go:130] > /usr/bin/crictl
I1101 00:09:13.135210 30593 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1101 00:09:13.203365 30593 command_runner.go:130] > Version: 0.1.0
I1101 00:09:13.203385 30593 command_runner.go:130] > RuntimeName: docker
I1101 00:09:13.203397 30593 command_runner.go:130] > RuntimeVersion: 24.0.6
I1101 00:09:13.203407 30593 command_runner.go:130] > RuntimeApiVersion: v1
I1101 00:09:13.203445 30593 start.go:556] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.6
RuntimeApiVersion: v1
I1101 00:09:13.203500 30593 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1101 00:09:13.228282 30593 command_runner.go:130] > 24.0.6
I1101 00:09:13.228417 30593 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1101 00:09:13.252487 30593 command_runner.go:130] > 24.0.6
I1101 00:09:13.254840 30593 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
I1101 00:09:13.254880 30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
I1101 00:09:13.257487 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:13.257845 30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
I1101 00:09:13.257879 30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
I1101 00:09:13.258035 30593 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1101 00:09:13.261869 30593 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 00:09:13.272965 30593 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I1101 00:09:13.273017 30593 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1101 00:09:13.291973 30593 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
I1101 00:09:13.292012 30593 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
I1101 00:09:13.292018 30593 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
I1101 00:09:13.292023 30593 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
I1101 00:09:13.292028 30593 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
I1101 00:09:13.292033 30593 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I1101 00:09:13.292039 30593 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I1101 00:09:13.292046 30593 command_runner.go:130] > registry.k8s.io/pause:3.9
I1101 00:09:13.292051 30593 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I1101 00:09:13.292058 30593 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I1101 00:09:13.292659 30593 docker.go:699] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
kindest/kindnetd:v20230809-80a64d96
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I1101 00:09:13.292679 30593 docker.go:629] Images already preloaded, skipping extraction
I1101 00:09:13.292737 30593 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1101 00:09:13.311772 30593 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
I1101 00:09:13.311797 30593 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
I1101 00:09:13.311806 30593 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
I1101 00:09:13.311814 30593 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
I1101 00:09:13.311821 30593 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
I1101 00:09:13.311826 30593 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I1101 00:09:13.311831 30593 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I1101 00:09:13.311836 30593 command_runner.go:130] > registry.k8s.io/pause:3.9
I1101 00:09:13.311841 30593 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I1101 00:09:13.311857 30593 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I1101 00:09:13.311882 30593 docker.go:699] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
kindest/kindnetd:v20230809-80a64d96
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I1101 00:09:13.311900 30593 cache_images.go:84] Images are preloaded, skipping loading
I1101 00:09:13.311963 30593 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1101 00:09:13.336389 30593 command_runner.go:130] > cgroupfs
I1101 00:09:13.336458 30593 cni.go:84] Creating CNI manager for ""
I1101 00:09:13.336469 30593 cni.go:136] 2 nodes found, recommending kindnet
I1101 00:09:13.336493 30593 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1101 00:09:13.336521 30593 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-391061 NodeName:multinode-391061 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1101 00:09:13.336694 30593 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.43
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "multinode-391061"
kubeletExtraArgs:
node-ip: 192.168.39.43
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1101 00:09:13.336788 30593 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-391061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
[Install]
config:
{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1101 00:09:13.336851 30593 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
I1101 00:09:13.346367 30593 command_runner.go:130] > kubeadm
I1101 00:09:13.346390 30593 command_runner.go:130] > kubectl
I1101 00:09:13.346396 30593 command_runner.go:130] > kubelet
I1101 00:09:13.346518 30593 binaries.go:44] Found k8s binaries, skipping transfer
I1101 00:09:13.346594 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1101 00:09:13.355275 30593 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
I1101 00:09:13.370971 30593 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1101 00:09:13.387036 30593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
I1101 00:09:13.402440 30593 ssh_runner.go:195] Run: grep 192.168.39.43 control-plane.minikube.internal$ /etc/hosts
I1101 00:09:13.406022 30593 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.43 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 00:09:13.417070 30593 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061 for IP: 192.168.39.43
I1101 00:09:13.417103 30593 certs.go:190] acquiring lock for shared ca certs: {Name:mkd78a553474b872bb63abf547b6fa0a317dc3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 00:09:13.417247 30593 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key
I1101 00:09:13.417296 30593 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key
I1101 00:09:13.417388 30593 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.key
I1101 00:09:13.417450 30593 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.key.7e75dda5
I1101 00:09:13.417508 30593 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.key
I1101 00:09:13.417523 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1101 00:09:13.417544 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1101 00:09:13.417575 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1101 00:09:13.417593 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1101 00:09:13.417603 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1101 00:09:13.417615 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1101 00:09:13.417625 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1101 00:09:13.417636 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1101 00:09:13.417690 30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem (1338 bytes)
W1101 00:09:13.417720 30593 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463_empty.pem, impossibly tiny 0 bytes
I1101 00:09:13.417729 30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem (1675 bytes)
I1101 00:09:13.417752 30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem (1082 bytes)
I1101 00:09:13.417776 30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem (1123 bytes)
I1101 00:09:13.417804 30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem (1675 bytes)
I1101 00:09:13.417847 30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem (1708 bytes)
I1101 00:09:13.417870 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem -> /usr/share/ca-certificates/14463.pem
I1101 00:09:13.417882 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> /usr/share/ca-certificates/144632.pem
I1101 00:09:13.417894 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1101 00:09:13.418474 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1101 00:09:13.440131 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1101 00:09:13.461354 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1101 00:09:13.484158 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1101 00:09:13.507642 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1101 00:09:13.530560 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1101 00:09:13.552173 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1101 00:09:13.572803 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1101 00:09:13.594200 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem --> /usr/share/ca-certificates/14463.pem (1338 bytes)
I1101 00:09:13.614546 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /usr/share/ca-certificates/144632.pem (1708 bytes)
I1101 00:09:13.635287 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1101 00:09:13.655804 30593 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
I1101 00:09:13.671160 30593 ssh_runner.go:195] Run: openssl version
I1101 00:09:13.676595 30593 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I1101 00:09:13.676661 30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14463.pem && ln -fs /usr/share/ca-certificates/14463.pem /etc/ssl/certs/14463.pem"
I1101 00:09:13.687719 30593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14463.pem
I1101 00:09:13.692306 30593 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 23:48 /usr/share/ca-certificates/14463.pem
I1101 00:09:13.692356 30593 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:48 /usr/share/ca-certificates/14463.pem
I1101 00:09:13.692398 30593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14463.pem
I1101 00:09:13.697913 30593 command_runner.go:130] > 51391683
I1101 00:09:13.698156 30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14463.pem /etc/ssl/certs/51391683.0"
I1101 00:09:13.708708 30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144632.pem && ln -fs /usr/share/ca-certificates/144632.pem /etc/ssl/certs/144632.pem"
I1101 00:09:13.718932 30593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144632.pem
I1101 00:09:13.723625 30593 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 23:48 /usr/share/ca-certificates/144632.pem
I1101 00:09:13.723665 30593 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:48 /usr/share/ca-certificates/144632.pem
I1101 00:09:13.723717 30593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144632.pem
I1101 00:09:13.729381 30593 command_runner.go:130] > 3ec20f2e
I1101 00:09:13.729472 30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144632.pem /etc/ssl/certs/3ec20f2e.0"
I1101 00:09:13.739928 30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1101 00:09:13.749888 30593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1101 00:09:13.754135 30593 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 23:44 /usr/share/ca-certificates/minikubeCA.pem
I1101 00:09:13.754186 30593 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:44 /usr/share/ca-certificates/minikubeCA.pem
I1101 00:09:13.754224 30593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1101 00:09:13.759372 30593 command_runner.go:130] > b5213941
I1101 00:09:13.759586 30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1101 00:09:13.770878 30593 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I1101 00:09:13.774944 30593 command_runner.go:130] > ca.crt
I1101 00:09:13.774961 30593 command_runner.go:130] > ca.key
I1101 00:09:13.774966 30593 command_runner.go:130] > healthcheck-client.crt
I1101 00:09:13.774977 30593 command_runner.go:130] > healthcheck-client.key
I1101 00:09:13.774981 30593 command_runner.go:130] > peer.crt
I1101 00:09:13.774985 30593 command_runner.go:130] > peer.key
I1101 00:09:13.774988 30593 command_runner.go:130] > server.crt
I1101 00:09:13.774993 30593 command_runner.go:130] > server.key
I1101 00:09:13.775195 30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1101 00:09:13.780693 30593 command_runner.go:130] > Certificate will not expire
I1101 00:09:13.781005 30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1101 00:09:13.786438 30593 command_runner.go:130] > Certificate will not expire
I1101 00:09:13.786773 30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1101 00:09:13.792247 30593 command_runner.go:130] > Certificate will not expire
I1101 00:09:13.792305 30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1101 00:09:13.797510 30593 command_runner.go:130] > Certificate will not expire
I1101 00:09:13.797845 30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1101 00:09:13.803206 30593 command_runner.go:130] > Certificate will not expire
I1101 00:09:13.803273 30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1101 00:09:13.808620 30593 command_runner.go:130] > Certificate will not expire
I1101 00:09:13.808816 30593 kubeadm.go:404] StartCluster: {Name:multinode-391061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kube
virt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1101 00:09:13.808974 30593 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1101 00:09:13.826906 30593 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1101 00:09:13.836480 30593 command_runner.go:130] > /var/lib/kubelet/config.yaml
I1101 00:09:13.836509 30593 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
I1101 00:09:13.836518 30593 command_runner.go:130] > /var/lib/minikube/etcd:
I1101 00:09:13.836524 30593 command_runner.go:130] > member
I1101 00:09:13.836597 30593 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I1101 00:09:13.836612 30593 kubeadm.go:636] restartCluster start
I1101 00:09:13.836669 30593 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1101 00:09:13.845747 30593 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1101 00:09:13.846165 30593 kubeconfig.go:135] verify returned: extract IP: "multinode-391061" does not appear in /home/jenkins/minikube-integration/17486-7251/kubeconfig
I1101 00:09:13.846289 30593 kubeconfig.go:146] "multinode-391061" context is missing from /home/jenkins/minikube-integration/17486-7251/kubeconfig - will repair!
I1101 00:09:13.846620 30593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 00:09:13.847028 30593 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17486-7251/kubeconfig
I1101 00:09:13.847260 30593 kapi.go:59] client config for multinode-391061: &rest.Config{Host:"https://192.168.39.43:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 00:09:13.847933 30593 cert_rotation.go:137] Starting client certificate rotation controller
I1101 00:09:13.848016 30593 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1101 00:09:13.857014 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:13.857066 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:13.868306 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:13.868326 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:13.868365 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:13.879425 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:14.380169 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:14.380271 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:14.393563 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:14.879961 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:14.880030 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:14.891500 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:15.380030 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:15.380116 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:15.394849 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:15.880377 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:15.880462 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:15.892276 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:16.379827 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:16.379933 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:16.391756 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:16.880389 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:16.880484 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:16.892186 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:17.379748 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:17.379838 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:17.391913 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:17.880537 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:17.880630 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:17.893349 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:18.379933 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:18.380022 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:18.391643 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:18.880268 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:18.880355 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:18.892132 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:19.379676 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:19.379760 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:19.391501 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:19.880377 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:19.880494 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:19.892270 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:20.379875 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:20.379968 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:20.391559 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:20.880250 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:20.880355 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:20.891729 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:21.380337 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:21.380407 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:21.391986 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:21.879571 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:21.879681 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:21.891291 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:22.379884 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:22.379978 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:22.391825 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:22.880476 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:22.880570 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:22.892224 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:23.379724 30593 api_server.go:166] Checking apiserver status ...
I1101 00:09:23.379835 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1101 00:09:23.391883 30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1101 00:09:23.857628 30593 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
I1101 00:09:23.857661 30593 kubeadm.go:1128] stopping kube-system containers ...
I1101 00:09:23.857758 30593 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1101 00:09:23.879399 30593 command_runner.go:130] > c8ec107c7b83
I1101 00:09:23.879423 30593 command_runner.go:130] > 8a050fec9e56
I1101 00:09:23.879444 30593 command_runner.go:130] > 0922f8b627ba
I1101 00:09:23.879448 30593 command_runner.go:130] > 7e5dd13abba8
I1101 00:09:23.879453 30593 command_runner.go:130] > 717d368b8c2a
I1101 00:09:23.879456 30593 command_runner.go:130] > beeaf0ac020b
I1101 00:09:23.879460 30593 command_runner.go:130] > d52c65ebca75
I1101 00:09:23.879464 30593 command_runner.go:130] > 5c355a51915e
I1101 00:09:23.879467 30593 command_runner.go:130] > 6e72da581d8b
I1101 00:09:23.879471 30593 command_runner.go:130] > 37d9dd0022b9
I1101 00:09:23.879475 30593 command_runner.go:130] > c5ea3d84d06f
I1101 00:09:23.879479 30593 command_runner.go:130] > 32294fac02b3
I1101 00:09:23.879482 30593 command_runner.go:130] > a49a86a47d7c
I1101 00:09:23.879486 30593 command_runner.go:130] > 36d5f0bd5cf2
I1101 00:09:23.879494 30593 command_runner.go:130] > 92b70c8321ee
I1101 00:09:23.879498 30593 command_runner.go:130] > 9f5176fde232
I1101 00:09:23.879502 30593 command_runner.go:130] > f576715f1f47
I1101 00:09:23.879506 30593 command_runner.go:130] > 44a2cc98732a
I1101 00:09:23.879509 30593 command_runner.go:130] > 5a2e590156b6
I1101 00:09:23.879518 30593 command_runner.go:130] > feea3a57d77e
I1101 00:09:23.879525 30593 command_runner.go:130] > 7ad930b36263
I1101 00:09:23.879528 30593 command_runner.go:130] > b110676d9563
I1101 00:09:23.879533 30593 command_runner.go:130] > 8659d1168087
I1101 00:09:23.879540 30593 command_runner.go:130] > 7f78495183a7
I1101 00:09:23.879543 30593 command_runner.go:130] > 21b2a7338538
I1101 00:09:23.879547 30593 command_runner.go:130] > 2b739c443c07
I1101 00:09:23.879553 30593 command_runner.go:130] > f8c33525e5e4
I1101 00:09:23.879557 30593 command_runner.go:130] > b6d83949182f
I1101 00:09:23.879561 30593 command_runner.go:130] > 8dc7f1a0f0cf
I1101 00:09:23.879565 30593 command_runner.go:130] > d114ab0f9727
I1101 00:09:23.879569 30593 command_runner.go:130] > 88e660774880
I1101 00:09:23.880506 30593 docker.go:470] Stopping containers: [c8ec107c7b83 8a050fec9e56 0922f8b627ba 7e5dd13abba8 717d368b8c2a beeaf0ac020b d52c65ebca75 5c355a51915e 6e72da581d8b 37d9dd0022b9 c5ea3d84d06f 32294fac02b3 a49a86a47d7c 36d5f0bd5cf2 92b70c8321ee 9f5176fde232 f576715f1f47 44a2cc98732a 5a2e590156b6 feea3a57d77e 7ad930b36263 b110676d9563 8659d1168087 7f78495183a7 21b2a7338538 2b739c443c07 f8c33525e5e4 b6d83949182f 8dc7f1a0f0cf d114ab0f9727 88e660774880]
I1101 00:09:23.880594 30593 ssh_runner.go:195] Run: docker stop c8ec107c7b83 8a050fec9e56 0922f8b627ba 7e5dd13abba8 717d368b8c2a beeaf0ac020b d52c65ebca75 5c355a51915e 6e72da581d8b 37d9dd0022b9 c5ea3d84d06f 32294fac02b3 a49a86a47d7c 36d5f0bd5cf2 92b70c8321ee 9f5176fde232 f576715f1f47 44a2cc98732a 5a2e590156b6 feea3a57d77e 7ad930b36263 b110676d9563 8659d1168087 7f78495183a7 21b2a7338538 2b739c443c07 f8c33525e5e4 b6d83949182f 8dc7f1a0f0cf d114ab0f9727 88e660774880
I1101 00:09:23.906747 30593 command_runner.go:130] > c8ec107c7b83
I1101 00:09:23.906784 30593 command_runner.go:130] > 8a050fec9e56
I1101 00:09:23.906790 30593 command_runner.go:130] > 0922f8b627ba
I1101 00:09:23.906941 30593 command_runner.go:130] > 7e5dd13abba8
I1101 00:09:23.907074 30593 command_runner.go:130] > 717d368b8c2a
I1101 00:09:23.907086 30593 command_runner.go:130] > beeaf0ac020b
I1101 00:09:23.907092 30593 command_runner.go:130] > d52c65ebca75
I1101 00:09:23.907110 30593 command_runner.go:130] > 5c355a51915e
I1101 00:09:23.907116 30593 command_runner.go:130] > 6e72da581d8b
I1101 00:09:23.907123 30593 command_runner.go:130] > 37d9dd0022b9
I1101 00:09:23.907130 30593 command_runner.go:130] > c5ea3d84d06f
I1101 00:09:23.907139 30593 command_runner.go:130] > 32294fac02b3
I1101 00:09:23.907146 30593 command_runner.go:130] > a49a86a47d7c
I1101 00:09:23.907157 30593 command_runner.go:130] > 36d5f0bd5cf2
I1101 00:09:23.907168 30593 command_runner.go:130] > 92b70c8321ee
I1101 00:09:23.907176 30593 command_runner.go:130] > 9f5176fde232
I1101 00:09:23.907188 30593 command_runner.go:130] > f576715f1f47
I1101 00:09:23.907198 30593 command_runner.go:130] > 44a2cc98732a
I1101 00:09:23.907202 30593 command_runner.go:130] > 5a2e590156b6
I1101 00:09:23.907207 30593 command_runner.go:130] > feea3a57d77e
I1101 00:09:23.907213 30593 command_runner.go:130] > 7ad930b36263
I1101 00:09:23.907220 30593 command_runner.go:130] > b110676d9563
I1101 00:09:23.907227 30593 command_runner.go:130] > 8659d1168087
I1101 00:09:23.907238 30593 command_runner.go:130] > 7f78495183a7
I1101 00:09:23.907244 30593 command_runner.go:130] > 21b2a7338538
I1101 00:09:23.907254 30593 command_runner.go:130] > 2b739c443c07
I1101 00:09:23.907263 30593 command_runner.go:130] > f8c33525e5e4
I1101 00:09:23.907270 30593 command_runner.go:130] > b6d83949182f
I1101 00:09:23.907278 30593 command_runner.go:130] > 8dc7f1a0f0cf
I1101 00:09:23.907284 30593 command_runner.go:130] > d114ab0f9727
I1101 00:09:23.907288 30593 command_runner.go:130] > 88e660774880
I1101 00:09:23.908329 30593 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1101 00:09:23.924405 30593 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 00:09:23.933413 30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I1101 00:09:23.933460 30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I1101 00:09:23.933474 30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I1101 00:09:23.933508 30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 00:09:23.933573 30593 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 00:09:23.933632 30593 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 00:09:23.942681 30593 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1101 00:09:23.942716 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1101 00:09:24.061200 30593 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1101 00:09:24.061740 30593 command_runner.go:130] > [certs] Using existing ca certificate authority
I1101 00:09:24.062273 30593 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I1101 00:09:24.062864 30593 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1101 00:09:24.063543 30593 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
I1101 00:09:24.064483 30593 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
I1101 00:09:24.065146 30593 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
I1101 00:09:24.065723 30593 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
I1101 00:09:24.066240 30593 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
I1101 00:09:24.066826 30593 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1101 00:09:24.067296 30593 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
I1101 00:09:24.067896 30593 command_runner.go:130] > [certs] Using the existing "sa" key
I1101 00:09:24.069200 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1101 00:09:24.889031 30593 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1101 00:09:24.889057 30593 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I1101 00:09:24.889063 30593 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1101 00:09:24.889069 30593 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1101 00:09:24.889075 30593 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1101 00:09:24.889099 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1101 00:09:25.068922 30593 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1101 00:09:25.068953 30593 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1101 00:09:25.068959 30593 command_runner.go:130] > [kubelet-start] Starting the kubelet
I1101 00:09:25.069343 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1101 00:09:25.134897 30593 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1101 00:09:25.134925 30593 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I1101 00:09:25.141279 30593 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1101 00:09:25.148755 30593 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I1101 00:09:25.153988 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I1101 00:09:25.224920 30593 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1101 00:09:25.228266 30593 api_server.go:52] waiting for apiserver process to appear ...
I1101 00:09:25.228336 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:25.246286 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:25.761474 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:26.261798 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:26.761515 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:27.261570 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:27.761008 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:27.804720 30593 command_runner.go:130] > 1704
I1101 00:09:27.806000 30593 api_server.go:72] duration metric: took 2.577736282s to wait for apiserver process to appear ...
I1101 00:09:27.806022 30593 api_server.go:88] waiting for apiserver healthz status ...
I1101 00:09:27.806041 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:27.806649 30593 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
I1101 00:09:27.806703 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:27.807202 30593 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
I1101 00:09:28.307960 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:31.401471 30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1101 00:09:31.401504 30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1101 00:09:31.401515 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:31.478349 30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1101 00:09:31.478386 30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1101 00:09:31.807657 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:31.816386 30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W1101 00:09:31.816421 30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I1101 00:09:32.308084 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:32.313351 30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W1101 00:09:32.313393 30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I1101 00:09:32.807687 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:32.814924 30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
ok
I1101 00:09:32.815019 30593 round_trippers.go:463] GET https://192.168.39.43:8443/version
I1101 00:09:32.815029 30593 round_trippers.go:469] Request Headers:
I1101 00:09:32.815039 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:32.815049 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:32.823839 30593 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I1101 00:09:32.823862 30593 round_trippers.go:577] Response Headers:
I1101 00:09:32.823873 30593 round_trippers.go:580] Audit-Id: 654a1cb8-a85b-41cb-aea3-21ea6bc79004
I1101 00:09:32.823885 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:32.823891 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:32.823898 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:32.823905 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:32.823913 30593 round_trippers.go:580] Content-Length: 264
I1101 00:09:32.823921 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:32 GMT
I1101 00:09:32.823947 30593 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.3",
"gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
"gitTreeState": "clean",
"buildDate": "2023-10-18T11:33:18Z",
"goVersion": "go1.20.10",
"compiler": "gc",
"platform": "linux/amd64"
}
I1101 00:09:32.824032 30593 api_server.go:141] control plane version: v1.28.3
I1101 00:09:32.824050 30593 api_server.go:131] duration metric: took 5.018019595s to wait for apiserver health ...
I1101 00:09:32.824061 30593 cni.go:84] Creating CNI manager for ""
I1101 00:09:32.824070 30593 cni.go:136] 2 nodes found, recommending kindnet
I1101 00:09:32.826169 30593 out.go:177] * Configuring CNI (Container Networking Interface) ...
I1101 00:09:32.827914 30593 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1101 00:09:32.841919 30593 command_runner.go:130] > File: /opt/cni/bin/portmap
I1101 00:09:32.841942 30593 command_runner.go:130] > Size: 2615256 Blocks: 5112 IO Block: 4096 regular file
I1101 00:09:32.841948 30593 command_runner.go:130] > Device: 11h/17d Inode: 3544 Links: 1
I1101 00:09:32.841955 30593 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I1101 00:09:32.841960 30593 command_runner.go:130] > Access: 2023-11-01 00:09:01.939751521 +0000
I1101 00:09:32.841969 30593 command_runner.go:130] > Modify: 2023-10-31 23:04:20.000000000 +0000
I1101 00:09:32.841974 30593 command_runner.go:130] > Change: 2023-11-01 00:09:00.154751521 +0000
I1101 00:09:32.841979 30593 command_runner.go:130] > Birth: -
I1101 00:09:32.843041 30593 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
I1101 00:09:32.843061 30593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I1101 00:09:32.868639 30593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1101 00:09:34.233741 30593 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I1101 00:09:34.264714 30593 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I1101 00:09:34.269029 30593 command_runner.go:130] > serviceaccount/kindnet unchanged
I1101 00:09:34.306476 30593 command_runner.go:130] > daemonset.apps/kindnet configured
I1101 00:09:34.313598 30593 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.44492846s)
I1101 00:09:34.313628 30593 system_pods.go:43] waiting for kube-system pods to appear ...
I1101 00:09:34.313739 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:34.313753 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.313764 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.313774 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.328832 30593 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
I1101 00:09:34.328855 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.328863 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.328871 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.328944 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.328962 30593 round_trippers.go:580] Audit-Id: 9a80f099-79a4-48ce-bc32-9266f1c0dc9f
I1101 00:09:34.328971 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.328985 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.330618 30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1205"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84772 chars]
I1101 00:09:34.334579 30593 system_pods.go:59] 12 kube-system pods found
I1101 00:09:34.334612 30593 system_pods.go:61] "coredns-5dd5756b68-dg5w7" [eb94555e-1465-4dec-9d6d-ebcbec02841e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 00:09:34.334627 30593 system_pods.go:61] "etcd-multinode-391061" [0537cc4c-2127-4424-b02f-9e4747bc8713] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1101 00:09:34.334633 30593 system_pods.go:61] "kindnet-4jfj9" [2559e20b-85cf-43d5-8663-7ec855d71df9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I1101 00:09:34.334638 30593 system_pods.go:61] "kindnet-lcljq" [171d5f22-d781-4224-88f7-f940ad9e747b] Running
I1101 00:09:34.334642 30593 system_pods.go:61] "kindnet-wrdhd" [85db010e-82bd-4efa-a760-0669bf1e52de] Running
I1101 00:09:34.334649 30593 system_pods.go:61] "kube-apiserver-multinode-391061" [dff82899-3db2-46a2-aea0-ec57d58be1c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1101 00:09:34.334659 30593 system_pods.go:61] "kube-controller-manager-multinode-391061" [4775e566-6acd-43ac-b7cd-8dbd245c33cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1101 00:09:34.334666 30593 system_pods.go:61] "kube-proxy-clsrp" [a747b091-d679-4ae6-a995-c980235c9a61] Running
I1101 00:09:34.334670 30593 system_pods.go:61] "kube-proxy-rcnv9" [9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9] Running
I1101 00:09:34.334674 30593 system_pods.go:61] "kube-proxy-vdjh2" [9838a111-09e4-4975-b925-1ae5dcfa7334] Running
I1101 00:09:34.334679 30593 system_pods.go:61] "kube-scheduler-multinode-391061" [eaf767ff-8f68-4b91-bcd7-b550481a6155] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1101 00:09:34.334685 30593 system_pods.go:61] "storage-provisioner" [b0b970e9-7d0b-4e94-8ca8-2f3348eaf579] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1101 00:09:34.334691 30593 system_pods.go:74] duration metric: took 21.056413ms to wait for pod list to return data ...
I1101 00:09:34.334704 30593 node_conditions.go:102] verifying NodePressure condition ...
I1101 00:09:34.334757 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes
I1101 00:09:34.334764 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.334771 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.334777 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.340145 30593 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I1101 00:09:34.340163 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.340169 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.340175 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.340180 30593 round_trippers.go:580] Audit-Id: 1531eb5d-604e-4c94-96b1-59616ac61bc1
I1101 00:09:34.340185 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.340189 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.340199 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.340500 30593 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1205"},"items":[{"metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma
nagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 9590 chars]
I1101 00:09:34.341106 30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1101 00:09:34.341127 30593 node_conditions.go:123] node cpu capacity is 2
I1101 00:09:34.341135 30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1101 00:09:34.341139 30593 node_conditions.go:123] node cpu capacity is 2
I1101 00:09:34.341143 30593 node_conditions.go:105] duration metric: took 6.435475ms to run NodePressure ...
I1101 00:09:34.341158 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1101 00:09:34.596643 30593 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I1101 00:09:34.664781 30593 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I1101 00:09:34.667106 30593 kubeadm.go:772] waiting for restarted kubelet to initialise ...
I1101 00:09:34.667212 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
I1101 00:09:34.667221 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.667228 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.667234 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.673886 30593 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I1101 00:09:34.673905 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.673912 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.673918 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.673923 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.673936 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.673941 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.673946 30593 round_trippers.go:580] Audit-Id: 7dc67d14-eb2e-46d1-aa78-54d52af1af34
I1101 00:09:34.675336 30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"etcd-multinode-391061","namespace":"kube-system","uid":"0537cc4c-2127-4424-b02f-9e4747bc8713","resourceVersion":"1180","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.43:2379","kubernetes.io/config.hash":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.mirror":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.seen":"2023-11-01T00:02:21.059094445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29766 chars]
I1101 00:09:34.676627 30593 kubeadm.go:787] kubelet initialised
I1101 00:09:34.676644 30593 kubeadm.go:788] duration metric: took 9.518378ms waiting for restarted kubelet to initialise ...
I1101 00:09:34.676651 30593 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 00:09:34.676705 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:34.676713 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.676720 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.676728 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.683293 30593 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I1101 00:09:34.683308 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.683315 30593 round_trippers.go:580] Audit-Id: b0192f99-985e-4aae-927b-c47d95fe8014
I1101 00:09:34.683321 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.683327 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.683332 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.683338 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.683350 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.685550 30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84772 chars]
I1101 00:09:34.688329 30593 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
I1101 00:09:34.688397 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:34.688408 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.688416 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.688421 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.698455 30593 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
I1101 00:09:34.699740 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.699755 30593 round_trippers.go:580] Audit-Id: eb7d9633-7fab-456d-a9f4-795f402a1e5a
I1101 00:09:34.699764 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.699774 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.699785 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.699794 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.699803 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.699985 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:34.700490 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:34.700507 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.700517 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.700526 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.713644 30593 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
I1101 00:09:34.713666 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.713679 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.713686 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.713694 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.713702 30593 round_trippers.go:580] Audit-Id: ee2f8b85-6ebc-4ce5-b02d-f9b38983f319
I1101 00:09:34.713710 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.713722 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.713963 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:34.714314 30593 pod_ready.go:97] node "multinode-391061" hosting pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.714332 30593 pod_ready.go:81] duration metric: took 25.984465ms waiting for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
E1101 00:09:34.714343 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.714355 30593 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:34.714451 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-391061
I1101 00:09:34.714465 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.714476 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.714486 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.716800 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:34.716818 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.716827 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.716838 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.716846 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.716854 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.716866 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.716879 30593 round_trippers.go:580] Audit-Id: 0183d545-7a83-4bf3-bb19-280d54d90e72
I1101 00:09:34.717288 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-391061","namespace":"kube-system","uid":"0537cc4c-2127-4424-b02f-9e4747bc8713","resourceVersion":"1180","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.43:2379","kubernetes.io/config.hash":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.mirror":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.seen":"2023-11-01T00:02:21.059094445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6296 chars]
I1101 00:09:34.717688 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:34.717702 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.717708 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.717715 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.719608 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:34.719624 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.719632 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.719640 30593 round_trippers.go:580] Audit-Id: cc656017-62ca-46cc-93aa-6f56e0bacf57
I1101 00:09:34.719647 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.719655 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.719663 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.719673 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.719831 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:34.720155 30593 pod_ready.go:97] node "multinode-391061" hosting pod "etcd-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.720173 30593 pod_ready.go:81] duration metric: took 5.809883ms waiting for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
E1101 00:09:34.720181 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "etcd-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.720222 30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:34.720281 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:34.720291 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.720302 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.720316 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.727693 30593 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I1101 00:09:34.727724 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.727735 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.727746 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.727757 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.727768 30593 round_trippers.go:580] Audit-Id: f429dcbd-b1c6-47e9-b094-3b51b74fd598
I1101 00:09:34.727779 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.727790 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.727953 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:34.728461 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:34.728479 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.728490 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.728500 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.730599 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:34.730613 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.730619 30593 round_trippers.go:580] Audit-Id: 0de3f8aa-089c-4434-b8d3-d71e99713bfd
I1101 00:09:34.730624 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.730632 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.730644 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.730660 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.730670 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.730850 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:34.731213 30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-apiserver-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.731234 30593 pod_ready.go:81] duration metric: took 11.0013ms waiting for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
E1101 00:09:34.731247 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-apiserver-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.731266 30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:34.731321 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-391061
I1101 00:09:34.731332 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.731342 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.731350 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.735460 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:34.735475 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.735481 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.735488 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.735501 30593 round_trippers.go:580] Audit-Id: 2bd7494f-9968-4fd2-aca0-bb70496933d6
I1101 00:09:34.735518 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.735525 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.735540 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.735848 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-391061","namespace":"kube-system","uid":"4775e566-6acd-43ac-b7cd-8dbd245c33cf","resourceVersion":"1178","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.mirror":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059092388Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1101 00:09:34.736287 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:34.736300 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.736307 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.736315 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.738460 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:34.738480 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.738490 30593 round_trippers.go:580] Audit-Id: b9555108-2183-46ca-b82f-b9cd6213e770
I1101 00:09:34.738511 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.738524 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.738532 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.738547 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.738555 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.738690 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:34.739057 30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-controller-manager-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.739086 30593 pod_ready.go:81] duration metric: took 7.809638ms waiting for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
E1101 00:09:34.739103 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-controller-manager-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:34.739113 30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
I1101 00:09:34.914034 30593 request.go:629] Waited for 174.835524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clsrp
I1101 00:09:34.914109 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clsrp
I1101 00:09:34.914114 30593 round_trippers.go:469] Request Headers:
I1101 00:09:34.914121 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:34.914131 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:34.916919 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:34.916946 30593 round_trippers.go:577] Response Headers:
I1101 00:09:34.916955 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:34.916964 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:34.916972 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:34.916983 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:34.916990 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:34 GMT
I1101 00:09:34.917003 30593 round_trippers.go:580] Audit-Id: 7b74a314-8cec-4d22-9be3-8af74ba926c4
I1101 00:09:34.917222 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-clsrp","generateName":"kube-proxy-","namespace":"kube-system","uid":"a747b091-d679-4ae6-a995-c980235c9a61","resourceVersion":"1203","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
I1101 00:09:35.113972 30593 request.go:629] Waited for 196.314968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:35.114094 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:35.114106 30593 round_trippers.go:469] Request Headers:
I1101 00:09:35.114117 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:35.114128 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:35.116700 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:35.116727 30593 round_trippers.go:577] Response Headers:
I1101 00:09:35.116736 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:35.116744 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:35 GMT
I1101 00:09:35.116752 30593 round_trippers.go:580] Audit-Id: 520e1602-a5d2-496e-9336-3d05ae9bf431
I1101 00:09:35.116760 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:35.116769 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:35.116778 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:35.116880 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:35.117203 30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-proxy-clsrp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:35.117220 30593 pod_ready.go:81] duration metric: took 378.09771ms waiting for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
E1101 00:09:35.117234 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-proxy-clsrp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:35.117249 30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
I1101 00:09:35.314720 30593 request.go:629] Waited for 197.37685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
I1101 00:09:35.314784 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
I1101 00:09:35.314790 30593 round_trippers.go:469] Request Headers:
I1101 00:09:35.314797 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:35.314806 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:35.317474 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:35.317495 30593 round_trippers.go:577] Response Headers:
I1101 00:09:35.317502 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:35 GMT
I1101 00:09:35.317508 30593 round_trippers.go:580] Audit-Id: 9af5c93f-eeb8-4bf5-91cf-0004ad594526
I1101 00:09:35.317513 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:35.317526 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:35.317532 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:35.317537 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:35.317656 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rcnv9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9","resourceVersion":"983","creationTimestamp":"2023-11-01T00:03:22Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:03:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5749 chars]
I1101 00:09:35.514541 30593 request.go:629] Waited for 196.422301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
I1101 00:09:35.514605 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
I1101 00:09:35.514610 30593 round_trippers.go:469] Request Headers:
I1101 00:09:35.514620 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:35.514626 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:35.516964 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:35.516981 30593 round_trippers.go:577] Response Headers:
I1101 00:09:35.516987 30593 round_trippers.go:580] Audit-Id: f60ca5be-eff7-45b6-b4ef-25a4244f2ac8
I1101 00:09:35.516992 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:35.516999 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:35.517007 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:35.517016 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:35.517024 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:35 GMT
I1101 00:09:35.517144 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061-m02","uid":"75fe164a-6fd6-4525-bacf-d792a509255b","resourceVersion":"999","creationTimestamp":"2023-11-01T00:07:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3253 chars]
I1101 00:09:35.517386 30593 pod_ready.go:92] pod "kube-proxy-rcnv9" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:35.517399 30593 pod_ready.go:81] duration metric: took 400.144025ms waiting for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
I1101 00:09:35.517407 30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
I1101 00:09:35.713801 30593 request.go:629] Waited for 196.321571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
I1101 00:09:35.713897 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
I1101 00:09:35.713902 30593 round_trippers.go:469] Request Headers:
I1101 00:09:35.713912 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:35.713919 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:35.718570 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:35.718593 30593 round_trippers.go:577] Response Headers:
I1101 00:09:35.718599 30593 round_trippers.go:580] Audit-Id: a80b7d1f-2804-4453-9d76-e2f5feeecd8b
I1101 00:09:35.718604 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:35.718609 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:35.718614 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:35.718619 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:35.718624 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:35 GMT
I1101 00:09:35.719017 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vdjh2","generateName":"kube-proxy-","namespace":"kube-system","uid":"9838a111-09e4-4975-b925-1ae5dcfa7334","resourceVersion":"1096","creationTimestamp":"2023-11-01T00:04:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
I1101 00:09:35.914812 30593 request.go:629] Waited for 195.361033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
I1101 00:09:35.914878 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
I1101 00:09:35.914884 30593 round_trippers.go:469] Request Headers:
I1101 00:09:35.914892 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:35.914905 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:35.918630 30593 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I1101 00:09:35.918651 30593 round_trippers.go:577] Response Headers:
I1101 00:09:35.918658 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:35.918669 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:35.918675 30593 round_trippers.go:580] Content-Length: 210
I1101 00:09:35.918680 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:35 GMT
I1101 00:09:35.918685 30593 round_trippers.go:580] Audit-Id: 8559bcdf-7ea2-4533-82a7-71b9489af62e
I1101 00:09:35.918693 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:35.918698 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:35.918716 30593 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-391061-m03\" not found","reason":"NotFound","details":{"name":"multinode-391061-m03","kind":"nodes"},"code":404}
I1101 00:09:35.918899 30593 pod_ready.go:97] node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
I1101 00:09:35.918915 30593 pod_ready.go:81] duration metric: took 401.503391ms waiting for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
E1101 00:09:35.918928 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
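The `kube-proxy-vdjh2` check fails differently from the earlier ones: the hosting node `multinode-391061-m03` no longer exists, so the apiserver returns a `Status` object with reason `NotFound` (the 404 body logged above). A stdlib-only sketch of recognising that error shape — field names follow the `metav1.Status` schema, but the helper itself is illustrative, not minikube's actual code path:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// isNotFound decodes an apiserver error body and reports whether it is a
// NotFound Status, i.e. the shape returned when a node has been deleted.
func isNotFound(body []byte) bool {
	var st struct {
		Kind   string `json:"kind"`
		Reason string `json:"reason"`
		Code   int    `json:"code"`
	}
	if err := json.Unmarshal(body, &st); err != nil {
		return false
	}
	return st.Kind == "Status" && st.Reason == "NotFound" && st.Code == 404
}

func main() {
	// The exact body logged for multinode-391061-m03 above.
	body := []byte(`{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure",` +
		`"message":"nodes \"multinode-391061-m03\" not found","reason":"NotFound",` +
		`"details":{"name":"multinode-391061-m03","kind":"nodes"},"code":404}`)
	fmt.Println(isNotFound(body)) // true
}
```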
I1101 00:09:35.918938 30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:36.114381 30593 request.go:629] Waited for 195.370649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:36.114441 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:36.114446 30593 round_trippers.go:469] Request Headers:
I1101 00:09:36.114453 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:36.114459 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:36.117280 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:36.117299 30593 round_trippers.go:577] Response Headers:
I1101 00:09:36.117305 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:36.117310 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:36.117316 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:36.117324 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:36 GMT
I1101 00:09:36.117332 30593 round_trippers.go:580] Audit-Id: 1a904aba-8eb8-4b24-84bc-bed0f6168940
I1101 00:09:36.117345 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:36.117488 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1187","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
I1101 00:09:36.314311 30593 request.go:629] Waited for 196.435913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:36.314416 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:36.314424 30593 round_trippers.go:469] Request Headers:
I1101 00:09:36.314432 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:36.314438 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:36.317156 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:36.317180 30593 round_trippers.go:577] Response Headers:
I1101 00:09:36.317187 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:36.317193 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:36 GMT
I1101 00:09:36.317198 30593 round_trippers.go:580] Audit-Id: 438f8f57-c6d3-4b09-82e1-c9c57e8542d5
I1101 00:09:36.317207 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:36.317226 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:36.317232 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:36.317370 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:36.317685 30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-scheduler-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:36.317702 30593 pod_ready.go:81] duration metric: took 398.74998ms waiting for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
E1101 00:09:36.317710 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-scheduler-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
I1101 00:09:36.317717 30593 pod_ready.go:38] duration metric: took 1.641059341s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
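Each of the `pod_ready.go` checks above boils down to fetching the pod's hosting node and inspecting its `Ready` condition; pods on a node reporting `"Ready":"False"` are skipped with the `WaitExtra` error. A minimal stdlib-only sketch of that condition lookup, operating on a Node object as returned by `GET /api/v1/nodes/<name>` — the field names follow the v1 Node schema, everything else is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeReady returns the value ("True", "False", or "Unknown") of the
// "Ready" condition in a Node's status, mirroring the check performed
// for each pod's hosting node in the log above.
func nodeReady(raw []byte) (string, error) {
	var node struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(raw, &node); err != nil {
		return "", err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status, nil
		}
	}
	return "Unknown", nil
}

func main() {
	// multinode-391061 reports "Ready":"False" throughout this phase.
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	s, err := nodeReady(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // False
}
```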
I1101 00:09:36.317736 30593 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1101 00:09:36.328581 30593 command_runner.go:130] > -16
I1101 00:09:36.329017 30593 ops.go:34] apiserver oom_adj: -16
I1101 00:09:36.329031 30593 kubeadm.go:640] restartCluster took 22.492412523s
I1101 00:09:36.329039 30593 kubeadm.go:406] StartCluster complete in 22.520229717s
I1101 00:09:36.329066 30593 settings.go:142] acquiring lock: {Name:mk57c659cffa0c6a1b184e5906c662f85ff8a099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 00:09:36.329145 30593 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17486-7251/kubeconfig
I1101 00:09:36.329734 30593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 00:09:36.329976 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1101 00:09:36.330139 30593 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
I1101 00:09:36.330259 30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1101 00:09:36.332516 30593 out.go:177] * Enabled addons:
I1101 00:09:36.330334 30593 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17486-7251/kubeconfig
I1101 00:09:36.334140 30593 addons.go:502] enable addons completed in 4.002956ms: enabled=[]
I1101 00:09:36.332878 30593 kapi.go:59] client config for multinode-391061: &rest.Config{Host:"https://192.168.39.43:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 00:09:36.334423 30593 round_trippers.go:463] GET https://192.168.39.43:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I1101 00:09:36.334436 30593 round_trippers.go:469] Request Headers:
I1101 00:09:36.334446 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:36.334454 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:36.337955 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:36.337986 30593 round_trippers.go:577] Response Headers:
I1101 00:09:36.337996 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:36.338004 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:36.338012 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:36.338027 30593 round_trippers.go:580] Content-Length: 292
I1101 00:09:36.338038 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:36 GMT
I1101 00:09:36.338050 30593 round_trippers.go:580] Audit-Id: 9324051b-7b18-4bb3-a5fe-00967444602f
I1101 00:09:36.338061 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:36.338088 30593 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0a6ee33a-4e79-49d5-be0e-4e19b76eb2c6","resourceVersion":"1206","creationTimestamp":"2023-11-01T00:02:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I1101 00:09:36.338210 30593 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-391061" context rescaled to 1 replicas
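The "rescaled to 1 replicas" decision is driven by the `autoscaling/v1` Scale object fetched from the deployment's `/scale` subresource (the 292-byte body above). A stdlib-only sketch of reading `spec.replicas` out of that shape — illustrative only, not minikube's actual kapi code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// replicasFromScale extracts spec.replicas from a Scale object, the same
// shape returned by GET .../deployments/coredns/scale in the log above.
func replicasFromScale(body []byte) (int, error) {
	var scale struct {
		Spec struct {
			Replicas int `json:"replicas"`
		} `json:"spec"`
	}
	if err := json.Unmarshal(body, &scale); err != nil {
		return 0, err
	}
	return scale.Spec.Replicas, nil
}

func main() {
	body := []byte(`{"kind":"Scale","apiVersion":"autoscaling/v1",` +
		`"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}`)
	n, err := replicasFromScale(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // 1
}
```

Since the deployment already reports one replica, no PUT back to the subresource is needed and the rescale is a no-op.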
I1101 00:09:36.338240 30593 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1101 00:09:36.340479 30593 out.go:177] * Verifying Kubernetes components...
I1101 00:09:36.342243 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 00:09:36.464070 30593 command_runner.go:130] > apiVersion: v1
I1101 00:09:36.464088 30593 command_runner.go:130] > data:
I1101 00:09:36.464092 30593 command_runner.go:130] > Corefile: |
I1101 00:09:36.464096 30593 command_runner.go:130] > .:53 {
I1101 00:09:36.464099 30593 command_runner.go:130] > log
I1101 00:09:36.464104 30593 command_runner.go:130] > errors
I1101 00:09:36.464108 30593 command_runner.go:130] > health {
I1101 00:09:36.464112 30593 command_runner.go:130] > lameduck 5s
I1101 00:09:36.464116 30593 command_runner.go:130] > }
I1101 00:09:36.464124 30593 command_runner.go:130] > ready
I1101 00:09:36.464129 30593 command_runner.go:130] > kubernetes cluster.local in-addr.arpa ip6.arpa {
I1101 00:09:36.464134 30593 command_runner.go:130] > pods insecure
I1101 00:09:36.464139 30593 command_runner.go:130] > fallthrough in-addr.arpa ip6.arpa
I1101 00:09:36.464143 30593 command_runner.go:130] > ttl 30
I1101 00:09:36.464147 30593 command_runner.go:130] > }
I1101 00:09:36.464151 30593 command_runner.go:130] > prometheus :9153
I1101 00:09:36.464154 30593 command_runner.go:130] > hosts {
I1101 00:09:36.464159 30593 command_runner.go:130] > 192.168.39.1 host.minikube.internal
I1101 00:09:36.464163 30593 command_runner.go:130] > fallthrough
I1101 00:09:36.464167 30593 command_runner.go:130] > }
I1101 00:09:36.464175 30593 command_runner.go:130] > forward . /etc/resolv.conf {
I1101 00:09:36.464180 30593 command_runner.go:130] > max_concurrent 1000
I1101 00:09:36.464184 30593 command_runner.go:130] > }
I1101 00:09:36.464188 30593 command_runner.go:130] > cache 30
I1101 00:09:36.464193 30593 command_runner.go:130] > loop
I1101 00:09:36.464198 30593 command_runner.go:130] > reload
I1101 00:09:36.464202 30593 command_runner.go:130] > loadbalance
I1101 00:09:36.464217 30593 command_runner.go:130] > }
I1101 00:09:36.464224 30593 command_runner.go:130] > kind: ConfigMap
I1101 00:09:36.464228 30593 command_runner.go:130] > metadata:
I1101 00:09:36.464233 30593 command_runner.go:130] > creationTimestamp: "2023-11-01T00:02:20Z"
I1101 00:09:36.464237 30593 command_runner.go:130] > name: coredns
I1101 00:09:36.464242 30593 command_runner.go:130] > namespace: kube-system
I1101 00:09:36.464246 30593 command_runner.go:130] > resourceVersion: "404"
I1101 00:09:36.464251 30593 command_runner.go:130] > uid: 9916bcab-f9a6-4b1c-a0a4-a33e2e2f738c
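For readability, the CoreDNS ConfigMap that `command_runner` echoes line-by-line above reassembles to the following (content exactly as logged; the nested indentation inside the Corefile is inferred, since the per-line log output flattens it):

```yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2023-11-01T00:02:20Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "404"
  uid: 9916bcab-f9a6-4b1c-a0a4-a33e2e2f738c
```

The `hosts` block already maps `192.168.39.1` to `host.minikube.internal`, which is why the next log line skips re-adding the record.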
I1101 00:09:36.466580 30593 node_ready.go:35] waiting up to 6m0s for node "multinode-391061" to be "Ready" ...
I1101 00:09:36.466667 30593 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
I1101 00:09:36.513888 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:36.513918 30593 round_trippers.go:469] Request Headers:
I1101 00:09:36.513926 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:36.513933 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:36.516967 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:36.516991 30593 round_trippers.go:577] Response Headers:
I1101 00:09:36.517002 30593 round_trippers.go:580] Audit-Id: 4d84eb47-da1a-4fd0-96d7-b23c142dcf7c
I1101 00:09:36.517010 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:36.517018 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:36.517030 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:36.517038 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:36.517064 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:36 GMT
I1101 00:09:36.517425 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:36.714232 30593 request.go:629] Waited for 196.4313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:36.714301 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:36.714308 30593 round_trippers.go:469] Request Headers:
I1101 00:09:36.714319 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:36.714329 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:36.716978 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:36.716999 30593 round_trippers.go:577] Response Headers:
I1101 00:09:36.717006 30593 round_trippers.go:580] Audit-Id: 043fbdbd-3263-4587-9070-be445407c188
I1101 00:09:36.717012 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:36.717017 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:36.717022 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:36.717027 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:36.717035 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:36 GMT
I1101 00:09:36.717202 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
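The `Waited for 196.4313ms due to client-side throttling` line above is emitted by client-go's built-in request rate limiter, which spaces outgoing API calls on the client before server-side priority and fairness ever sees them. A minimal, stdlib-only sketch of that kind of token-bucket wait (the `tokenBucket` type, rates, and burst size here are illustrative stand-ins, not client-go's actual `flowcontrol` implementation):

```go
package main

import (
	"fmt"
	"time"
)

// tokenBucket is an illustrative client-side rate limiter: each request
// consumes a token; when the bucket is empty the caller sleeps until one
// accrues. (client-go's real limiter lives in k8s.io/client-go/util/flowcontrol.)
type tokenBucket struct {
	tokens   float64
	capacity float64
	rate     float64 // tokens added per second
	last     time.Time
}

func newTokenBucket(rate, capacity float64) *tokenBucket {
	return &tokenBucket{tokens: capacity, capacity: capacity, rate: rate, last: time.Now()}
}

// Wait blocks until a token is available and reports how long it waited —
// the duration that ends up in the "Waited for ..." log line.
func (b *tokenBucket) Wait() time.Duration {
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return 0
	}
	wait := time.Duration((1 - b.tokens) / b.rate * float64(time.Second))
	time.Sleep(wait)
	b.tokens = 0
	b.last = time.Now()
	return wait
}

func main() {
	tb := newTokenBucket(5, 2) // 5 req/s with a burst of 2 — illustrative numbers
	for i := 0; i < 4; i++ {
		if waited := tb.Wait(); waited > 0 {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, waited.Round(time.Millisecond))
		} else {
			fmt.Printf("request %d sent immediately\n", i)
		}
	}
}
```

The burst capacity is why the first couple of requests in each log cluster go out back-to-back while later ones are delayed.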
I1101 00:09:37.218413 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:37.218434 30593 round_trippers.go:469] Request Headers:
I1101 00:09:37.218447 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:37.218453 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:37.222719 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:37.222748 30593 round_trippers.go:577] Response Headers:
I1101 00:09:37.222759 30593 round_trippers.go:580] Audit-Id: 917dad8e-af16-42b6-88ae-5dcab424bb1e
I1101 00:09:37.222768 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:37.222778 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:37.222790 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:37.222802 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:37.222813 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:37 GMT
I1101 00:09:37.223475 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:37.718082 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:37.718126 30593 round_trippers.go:469] Request Headers:
I1101 00:09:37.718135 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:37.718141 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:37.721049 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:37.721077 30593 round_trippers.go:577] Response Headers:
I1101 00:09:37.721088 30593 round_trippers.go:580] Audit-Id: 06dcc7c1-bdd2-4e9f-870d-80146268aafa
I1101 00:09:37.721101 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:37.721121 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:37.721130 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:37.721139 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:37.721148 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:37 GMT
I1101 00:09:37.721272 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:38.218868 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:38.218893 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.218903 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.218912 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.222059 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:38.222083 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.222105 30593 round_trippers.go:580] Audit-Id: ad14bc98-1add-4a13-8ab1-495ec6575c6e
I1101 00:09:38.222111 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.222116 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.222121 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.222126 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.222131 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.222638 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
I1101 00:09:38.718331 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:38.718356 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.718364 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.718370 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.721280 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:38.721307 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.721314 30593 round_trippers.go:580] Audit-Id: 32a342cc-ec48-43cc-b0f0-efe6838ba34f
I1101 00:09:38.721319 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.721324 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.721329 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.721334 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.721339 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.721695 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:38.722003 30593 node_ready.go:49] node "multinode-391061" has status "Ready":"True"
I1101 00:09:38.722018 30593 node_ready.go:38] duration metric: took 2.255410222s waiting for node "multinode-391061" to be "Ready" ...
I1101 00:09:38.722030 30593 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 00:09:38.722093 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:38.722102 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.722113 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.722121 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.726178 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:38.726200 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.726211 30593 round_trippers.go:580] Audit-Id: d4651bc2-6bb9-4745-9c25-8f2b530c877c
I1101 00:09:38.726220 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.726227 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.726236 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.726244 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.726253 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.727979 30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1218"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84372 chars]
I1101 00:09:38.731666 30593 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
I1101 00:09:38.731777 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:38.731788 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.731797 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.731804 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.734353 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:38.734368 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.734375 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.734380 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.734386 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.734391 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.734396 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.734401 30593 round_trippers.go:580] Audit-Id: f0f6d35c-893f-4b34-bb39-154e16bedbe1
I1101 00:09:38.734672 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:38.735183 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:38.735200 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.735208 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.735214 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.737368 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:38.737382 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.737388 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.737393 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.737398 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.737405 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.737418 30593 round_trippers.go:580] Audit-Id: f978b19f-d984-48d1-b95c-0f850f106969
I1101 00:09:38.737423 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.737700 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:38.738062 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:38.738078 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.738086 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.738092 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.740363 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:38.740379 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.740385 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.740390 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.740395 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.740408 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.740418 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.740423 30593 round_trippers.go:580] Audit-Id: c33f3cc3-4753-4832-a887-2f2bce060625
I1101 00:09:38.740727 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:38.741200 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:38.741213 30593 round_trippers.go:469] Request Headers:
I1101 00:09:38.741220 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:38.741226 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:38.743369 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:38.743385 30593 round_trippers.go:577] Response Headers:
I1101 00:09:38.743392 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:38 GMT
I1101 00:09:38.743397 30593 round_trippers.go:580] Audit-Id: ccc0a48d-0d10-468a-a49f-71ad3ebd3363
I1101 00:09:38.743402 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:38.743407 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:38.743414 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:38.743419 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:38.743797 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:39.244680 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:39.244705 30593 round_trippers.go:469] Request Headers:
I1101 00:09:39.244713 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:39.244719 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:39.249913 30593 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I1101 00:09:39.249935 30593 round_trippers.go:577] Response Headers:
I1101 00:09:39.249943 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:39.249948 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:39.249954 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:39.249959 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:39.249964 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:39 GMT
I1101 00:09:39.249971 30593 round_trippers.go:580] Audit-Id: 12d94c73-c75e-46e9-871a-9b74acd630d6
I1101 00:09:39.250237 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:39.250731 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:39.250745 30593 round_trippers.go:469] Request Headers:
I1101 00:09:39.250754 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:39.250760 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:39.253732 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:39.253752 30593 round_trippers.go:577] Response Headers:
I1101 00:09:39.253761 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:39.253770 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:39 GMT
I1101 00:09:39.253778 30593 round_trippers.go:580] Audit-Id: 2a48db27-174b-4246-a989-ca7f61b115f9
I1101 00:09:39.253787 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:39.253793 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:39.253798 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:39.254037 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:39.744690 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:39.744715 30593 round_trippers.go:469] Request Headers:
I1101 00:09:39.744724 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:39.744729 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:39.748026 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:39.748050 30593 round_trippers.go:577] Response Headers:
I1101 00:09:39.748060 30593 round_trippers.go:580] Audit-Id: d31dc218-4603-4f82-a559-2e3697ff06e2
I1101 00:09:39.748072 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:39.748080 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:39.748087 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:39.748098 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:39.748105 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:39 GMT
I1101 00:09:39.748732 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:39.749181 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:39.749196 30593 round_trippers.go:469] Request Headers:
I1101 00:09:39.749206 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:39.749215 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:39.751958 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:39.751980 30593 round_trippers.go:577] Response Headers:
I1101 00:09:39.751989 30593 round_trippers.go:580] Audit-Id: b460f490-de79-4762-b30a-6cdd07942ced
I1101 00:09:39.751997 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:39.752005 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:39.752015 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:39.752021 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:39.752029 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:39 GMT
I1101 00:09:39.752310 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:40.244413 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:40.244438 30593 round_trippers.go:469] Request Headers:
I1101 00:09:40.244446 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:40.244452 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:40.248489 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:40.248512 30593 round_trippers.go:577] Response Headers:
I1101 00:09:40.248521 30593 round_trippers.go:580] Audit-Id: ccff4954-c9ff-4a7f-9536-aa2b767dc311
I1101 00:09:40.248528 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:40.248533 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:40.248538 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:40.248544 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:40.248549 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:40 GMT
I1101 00:09:40.248729 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:40.249180 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:40.249194 30593 round_trippers.go:469] Request Headers:
I1101 00:09:40.249201 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:40.249209 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:40.252171 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:40.252188 30593 round_trippers.go:577] Response Headers:
I1101 00:09:40.252194 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:40.252199 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:40.252203 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:40.252208 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:40.252213 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:40 GMT
I1101 00:09:40.252218 30593 round_trippers.go:580] Audit-Id: ca95e9f6-880f-4555-aa29-16a66b7bf628
I1101 00:09:40.252484 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:40.745314 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:40.745341 30593 round_trippers.go:469] Request Headers:
I1101 00:09:40.745350 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:40.745357 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:40.747878 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:40.747895 30593 round_trippers.go:577] Response Headers:
I1101 00:09:40.747902 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:40.747910 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:40.747924 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:40 GMT
I1101 00:09:40.747932 30593 round_trippers.go:580] Audit-Id: b88089ad-e6cf-4b38-b7fb-da565b4e5c79
I1101 00:09:40.747940 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:40.747951 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:40.748125 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:40.748587 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:40.748601 30593 round_trippers.go:469] Request Headers:
I1101 00:09:40.748611 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:40.748617 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:40.750689 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:40.750703 30593 round_trippers.go:577] Response Headers:
I1101 00:09:40.750710 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:40 GMT
I1101 00:09:40.750721 30593 round_trippers.go:580] Audit-Id: 3a208361-9be9-4a15-8f86-f26ff624d9b3
I1101 00:09:40.750729 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:40.750736 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:40.750744 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:40.750755 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:40.750912 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:40.751208 30593 pod_ready.go:102] pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace has status "Ready":"False"
I1101 00:09:41.244531 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:41.244555 30593 round_trippers.go:469] Request Headers:
I1101 00:09:41.244563 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:41.244569 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:41.247236 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:41.247254 30593 round_trippers.go:577] Response Headers:
I1101 00:09:41.247264 30593 round_trippers.go:580] Audit-Id: 0a7a1192-7352-4f99-a239-ebbd6ca40e85
I1101 00:09:41.247272 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:41.247279 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:41.247289 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:41.247298 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:41.247318 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:41 GMT
I1101 00:09:41.247449 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:41.247870 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:41.247882 30593 round_trippers.go:469] Request Headers:
I1101 00:09:41.247889 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:41.247894 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:41.250080 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:41.250098 30593 round_trippers.go:577] Response Headers:
I1101 00:09:41.250104 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:41.250109 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:41 GMT
I1101 00:09:41.250114 30593 round_trippers.go:580] Audit-Id: 629d69c5-3174-4a7d-aa0d-8f22f6d5b2f6
I1101 00:09:41.250130 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:41.250138 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:41.250146 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:41.250326 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:41.745038 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:41.745066 30593 round_trippers.go:469] Request Headers:
I1101 00:09:41.745074 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:41.745080 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:41.748544 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:41.748570 30593 round_trippers.go:577] Response Headers:
I1101 00:09:41.748581 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:41.748590 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:41.748598 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:41.748606 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:41 GMT
I1101 00:09:41.748625 30593 round_trippers.go:580] Audit-Id: b22bcb01-f5bf-4a1d-aad0-6c0ab2d577d4
I1101 00:09:41.748637 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:41.748855 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1101 00:09:41.749306 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:41.749318 30593 round_trippers.go:469] Request Headers:
I1101 00:09:41.749325 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:41.749331 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:41.755594 30593 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I1101 00:09:41.755639 30593 round_trippers.go:577] Response Headers:
I1101 00:09:41.755649 30593 round_trippers.go:580] Audit-Id: a64448a4-caec-4cfe-9700-2fbbc35230d2
I1101 00:09:41.755657 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:41.755665 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:41.755673 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:41.755680 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:41.755695 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:41 GMT
I1101 00:09:41.755860 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:42.244432 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
I1101 00:09:42.244456 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.244464 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.244470 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.247204 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:42.247227 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.247238 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.247247 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.247256 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.247267 30593 round_trippers.go:580] Audit-Id: 003f9883-5c30-40fd-aa1f-88b585473b07
I1101 00:09:42.247272 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.247278 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.247475 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1232","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
I1101 00:09:42.248064 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:42.248082 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.248093 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.248100 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.251135 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:42.251152 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.251158 30593 round_trippers.go:580] Audit-Id: 1d944e3b-2b90-4cb4-b54e-e4dc8e023493
I1101 00:09:42.251168 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.251172 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.251177 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.251182 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.251187 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.251385 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:42.251763 30593 pod_ready.go:92] pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:42.251782 30593 pod_ready.go:81] duration metric: took 3.52008861s waiting for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
I1101 00:09:42.251794 30593 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:42.251868 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-391061
I1101 00:09:42.251880 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.251891 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.251901 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.253932 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:42.253950 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.253957 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.253962 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.253967 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.253975 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.253980 30593 round_trippers.go:580] Audit-Id: 8a73d4e8-1e4e-4883-908a-5c09ce62f8c3
I1101 00:09:42.253985 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.254150 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-391061","namespace":"kube-system","uid":"0537cc4c-2127-4424-b02f-9e4747bc8713","resourceVersion":"1227","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.43:2379","kubernetes.io/config.hash":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.mirror":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.seen":"2023-11-01T00:02:21.059094445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6072 chars]
I1101 00:09:42.254640 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:42.254655 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.254674 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.254685 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.256694 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:42.256708 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.256715 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.256723 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.256731 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.256740 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.256749 30593 round_trippers.go:580] Audit-Id: 4c1b620e-fff1-4494-89d2-83c513fc0fc0
I1101 00:09:42.256757 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.256951 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:42.257268 30593 pod_ready.go:92] pod "etcd-multinode-391061" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:42.257283 30593 pod_ready.go:81] duration metric: took 5.477797ms waiting for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:42.257306 30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:42.257369 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:42.257379 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.257390 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.257399 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.259467 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:42.259483 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.259492 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.259499 30593 round_trippers.go:580] Audit-Id: 05d95e16-1d4e-4f81-a9d5-b2b141ff765d
I1101 00:09:42.259508 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.259517 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.259526 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.259535 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.259733 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:42.260255 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:42.260274 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.260281 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.260287 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.262250 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:42.262265 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.262275 30593 round_trippers.go:580] Audit-Id: ff748f0c-35a9-4061-b5ed-b0472309e27b
I1101 00:09:42.262282 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.262290 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.262298 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.262310 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.262318 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.262580 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:42.314176 30593 request.go:629] Waited for 51.260114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:42.314237 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:42.314242 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.314249 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.314256 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.317908 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:42.317937 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.317948 30593 round_trippers.go:580] Audit-Id: fa52f436-6e2b-418e-972d-6b4c1f1c0fcb
I1101 00:09:42.317957 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.317966 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.317971 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.317976 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.317984 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.318154 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:42.514148 30593 request.go:629] Waited for 195.42483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:42.514213 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:42.514221 30593 round_trippers.go:469] Request Headers:
I1101 00:09:42.514235 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:42.514291 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:42.516991 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:42.517017 30593 round_trippers.go:577] Response Headers:
I1101 00:09:42.517026 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:42.517035 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:42.517044 30593 round_trippers.go:580] Audit-Id: 71439942-ddcd-4159-8952-4d34c7b14582
I1101 00:09:42.517052 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:42.517059 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:42.517068 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:42.517221 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:43.018410 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:43.018439 30593 round_trippers.go:469] Request Headers:
I1101 00:09:43.018449 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:43.018459 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:43.021587 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:43.021609 30593 round_trippers.go:577] Response Headers:
I1101 00:09:43.021616 30593 round_trippers.go:580] Audit-Id: 7c4f42ca-82c7-4601-9dd3-7fa193eec32f
I1101 00:09:43.021621 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:43.021626 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:43.021631 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:43.021636 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:43.021642 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:43.021917 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:43.022342 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:43.022357 30593 round_trippers.go:469] Request Headers:
I1101 00:09:43.022368 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:43.022376 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:43.025247 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:43.025262 30593 round_trippers.go:577] Response Headers:
I1101 00:09:43.025268 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:43.025280 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:43.025289 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:43.025298 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:43.025310 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:42 GMT
I1101 00:09:43.025316 30593 round_trippers.go:580] Audit-Id: a4d1586f-de58-43b9-93f2-43b9726b8133
I1101 00:09:43.025864 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:43.518711 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:43.518737 30593 round_trippers.go:469] Request Headers:
I1101 00:09:43.518746 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:43.518752 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:43.521991 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:43.522017 30593 round_trippers.go:577] Response Headers:
I1101 00:09:43.522027 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:43.522036 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:43.522044 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:43 GMT
I1101 00:09:43.522058 30593 round_trippers.go:580] Audit-Id: ee145f23-1a35-4e40-acd4-1b329858fdfd
I1101 00:09:43.522065 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:43.522076 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:43.522321 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:43.522816 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:43.522832 30593 round_trippers.go:469] Request Headers:
I1101 00:09:43.522839 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:43.522845 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:43.525300 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:43.525321 30593 round_trippers.go:577] Response Headers:
I1101 00:09:43.525329 30593 round_trippers.go:580] Audit-Id: a16446ac-4c9e-462b-a604-37ce52442eb5
I1101 00:09:43.525336 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:43.525344 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:43.525351 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:43.525358 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:43.525365 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:43 GMT
I1101 00:09:43.525589 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:44.018504 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:44.018526 30593 round_trippers.go:469] Request Headers:
I1101 00:09:44.018534 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:44.018539 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:44.021345 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:44.021368 30593 round_trippers.go:577] Response Headers:
I1101 00:09:44.021379 30593 round_trippers.go:580] Audit-Id: 23afddaf-e391-4a40-9206-ba5a97021cd1
I1101 00:09:44.021389 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:44.021397 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:44.021402 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:44.021408 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:44.021413 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:43 GMT
I1101 00:09:44.021781 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:44.022178 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:44.022191 30593 round_trippers.go:469] Request Headers:
I1101 00:09:44.022201 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:44.022206 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:44.024358 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:44.024374 30593 round_trippers.go:577] Response Headers:
I1101 00:09:44.024380 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:44.024385 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:44.024390 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:44.024395 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:43 GMT
I1101 00:09:44.024400 30593 round_trippers.go:580] Audit-Id: 10d30ea6-f2a4-4468-b8d9-fe4d25cd5e9a
I1101 00:09:44.024404 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:44.024539 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:44.518209 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:44.518235 30593 round_trippers.go:469] Request Headers:
I1101 00:09:44.518243 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:44.518249 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:44.521184 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:44.521208 30593 round_trippers.go:577] Response Headers:
I1101 00:09:44.521218 30593 round_trippers.go:580] Audit-Id: fc8c6383-2699-422a-8176-ddcab44a9a9c
I1101 00:09:44.521238 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:44.521246 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:44.521255 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:44.521264 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:44.521273 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:44 GMT
I1101 00:09:44.521459 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:44.521894 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:44.521907 30593 round_trippers.go:469] Request Headers:
I1101 00:09:44.521914 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:44.521920 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:44.524063 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:44.524079 30593 round_trippers.go:577] Response Headers:
I1101 00:09:44.524085 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:44 GMT
I1101 00:09:44.524135 30593 round_trippers.go:580] Audit-Id: e14e26a5-28ca-4d3f-bae4-eea46c9e3a5b
I1101 00:09:44.524159 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:44.524167 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:44.524177 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:44.524182 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:44.524354 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:44.524642 30593 pod_ready.go:102] pod "kube-apiserver-multinode-391061" in "kube-system" namespace has status "Ready":"False"
I1101 00:09:45.017778 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:45.017807 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.017815 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.017822 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.021073 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:45.021103 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.021114 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.021124 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.021133 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.021142 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.021151 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:44 GMT
I1101 00:09:45.021160 30593 round_trippers.go:580] Audit-Id: 0dd2be34-8929-487b-8348-a144ffa6b941
I1101 00:09:45.021400 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1101 00:09:45.021872 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:45.021889 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.021897 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.021908 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.024844 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:45.024865 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.024874 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.024882 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.024889 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:44 GMT
I1101 00:09:45.024897 30593 round_trippers.go:580] Audit-Id: db32154e-ea80-4382-b7a1-53821506f75f
I1101 00:09:45.024905 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.024912 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.025668 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:45.518404 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
I1101 00:09:45.518429 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.518437 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.518442 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.521045 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:45.521065 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.521072 30593 round_trippers.go:580] Audit-Id: 32e5cb3c-6d81-4568-831d-7a0dc39dbca2
I1101 00:09:45.521077 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.521088 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.521093 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.521098 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.521103 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.521484 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1242","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7607 chars]
I1101 00:09:45.521900 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:45.521917 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.521924 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.521929 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.524067 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:45.524082 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.524088 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.524096 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.524104 30593 round_trippers.go:580] Audit-Id: 31736dc5-73c3-44fb-9ab2-5a9f73f0e730
I1101 00:09:45.524113 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.524121 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.524130 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.524429 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:45.524707 30593 pod_ready.go:92] pod "kube-apiserver-multinode-391061" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:45.524722 30593 pod_ready.go:81] duration metric: took 3.267408141s waiting for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:45.524730 30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:45.524780 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-391061
I1101 00:09:45.524789 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.524796 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.524801 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.526609 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:45.526623 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.526629 30593 round_trippers.go:580] Audit-Id: c91e4f63-f1b9-4d99-b2a0-1ae44d4e3920
I1101 00:09:45.526634 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.526639 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.526644 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.526649 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.526654 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.526976 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-391061","namespace":"kube-system","uid":"4775e566-6acd-43ac-b7cd-8dbd245c33cf","resourceVersion":"1240","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.mirror":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059092388Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
I1101 00:09:45.527354 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:45.527366 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.527373 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.527379 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.529038 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:45.529053 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.529064 30593 round_trippers.go:580] Audit-Id: 6d668043-98c8-4c98-9b23-07c7419995e3
I1101 00:09:45.529069 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.529074 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.529079 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.529084 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.529089 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.529310 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:45.529599 30593 pod_ready.go:92] pod "kube-controller-manager-multinode-391061" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:45.529612 30593 pod_ready.go:81] duration metric: took 4.877104ms waiting for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:45.529629 30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
I1101 00:09:45.529698 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clsrp
I1101 00:09:45.529709 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.529717 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.529727 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.531667 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:45.531685 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.531694 30593 round_trippers.go:580] Audit-Id: 179e6548-b6dd-4972-8941-597dc0f20790
I1101 00:09:45.531703 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.531718 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.531724 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.531731 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.531737 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.532195 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-clsrp","generateName":"kube-proxy-","namespace":"kube-system","uid":"a747b091-d679-4ae6-a995-c980235c9a61","resourceVersion":"1203","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
I1101 00:09:45.713849 30593 request.go:629] Waited for 181.057235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:45.713909 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:45.713914 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.713921 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.713927 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.716619 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:45.716637 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.716643 30593 round_trippers.go:580] Audit-Id: 426c242f-3496-4e53-8631-c1189b21932f
I1101 00:09:45.716649 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.716657 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.716665 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.716677 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.716689 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.716889 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:45.717308 30593 pod_ready.go:92] pod "kube-proxy-clsrp" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:45.717325 30593 pod_ready.go:81] duration metric: took 187.686843ms waiting for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
I1101 00:09:45.717337 30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
I1101 00:09:45.914796 30593 request.go:629] Waited for 197.399239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
I1101 00:09:45.914852 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
I1101 00:09:45.914857 30593 round_trippers.go:469] Request Headers:
I1101 00:09:45.914864 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:45.914871 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:45.917416 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:45.917445 30593 round_trippers.go:577] Response Headers:
I1101 00:09:45.917454 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:45 GMT
I1101 00:09:45.917462 30593 round_trippers.go:580] Audit-Id: 9cba40f3-3ad3-42a3-b93f-aa9cc6fc7dd3
I1101 00:09:45.917475 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:45.917480 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:45.917486 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:45.917492 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:45.917704 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rcnv9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9","resourceVersion":"983","creationTimestamp":"2023-11-01T00:03:22Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:03:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5749 chars]
I1101 00:09:46.114598 30593 request.go:629] Waited for 196.375687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
I1101 00:09:46.114664 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
I1101 00:09:46.114691 30593 round_trippers.go:469] Request Headers:
I1101 00:09:46.114704 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:46.114710 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:46.117340 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:46.117362 30593 round_trippers.go:577] Response Headers:
I1101 00:09:46.117371 30593 round_trippers.go:580] Audit-Id: fc111c34-c570-4e3f-9832-d982a0432bc7
I1101 00:09:46.117379 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:46.117388 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:46.117396 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:46.117408 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:46.117421 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:46 GMT
I1101 00:09:46.117518 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061-m02","uid":"75fe164a-6fd6-4525-bacf-d792a509255b","resourceVersion":"999","creationTimestamp":"2023-11-01T00:07:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3253 chars]
I1101 00:09:46.117775 30593 pod_ready.go:92] pod "kube-proxy-rcnv9" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:46.117792 30593 pod_ready.go:81] duration metric: took 400.44672ms waiting for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
I1101 00:09:46.117804 30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
I1101 00:09:46.314248 30593 request.go:629] Waited for 196.387545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
I1101 00:09:46.314341 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
I1101 00:09:46.314358 30593 round_trippers.go:469] Request Headers:
I1101 00:09:46.314369 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:46.314378 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:46.317400 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:46.317420 30593 round_trippers.go:577] Response Headers:
I1101 00:09:46.317429 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:46.317437 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:46 GMT
I1101 00:09:46.317445 30593 round_trippers.go:580] Audit-Id: feb64aac-545a-4487-be55-41e7c0e9ef0c
I1101 00:09:46.317454 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:46.317463 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:46.317473 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:46.317739 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vdjh2","generateName":"kube-proxy-","namespace":"kube-system","uid":"9838a111-09e4-4975-b925-1ae5dcfa7334","resourceVersion":"1096","creationTimestamp":"2023-11-01T00:04:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
I1101 00:09:46.514556 30593 request.go:629] Waited for 196.355467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
I1101 00:09:46.514623 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
I1101 00:09:46.514630 30593 round_trippers.go:469] Request Headers:
I1101 00:09:46.514642 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:46.514652 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:46.517667 30593 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1101 00:09:46.517686 30593 round_trippers.go:577] Response Headers:
I1101 00:09:46.517695 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:46 GMT
I1101 00:09:46.517703 30593 round_trippers.go:580] Audit-Id: dee8bed2-39ff-4ddf-9b35-2afcacefb08c
I1101 00:09:46.517710 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:46.517717 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:46.517725 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:46.517732 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:46.517743 30593 round_trippers.go:580] Content-Length: 210
I1101 00:09:46.517769 30593 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-391061-m03\" not found","reason":"NotFound","details":{"name":"multinode-391061-m03","kind":"nodes"},"code":404}
I1101 00:09:46.517879 30593 pod_ready.go:97] node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
I1101 00:09:46.517896 30593 pod_ready.go:81] duration metric: took 400.083902ms waiting for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
E1101 00:09:46.517909 30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
I1101 00:09:46.517918 30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:46.714359 30593 request.go:629] Waited for 196.368032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:46.714428 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:46.714439 30593 round_trippers.go:469] Request Headers:
I1101 00:09:46.714450 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:46.714460 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:46.717601 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:46.717622 30593 round_trippers.go:577] Response Headers:
I1101 00:09:46.717631 30593 round_trippers.go:580] Audit-Id: b10ec514-fb68-4eb7-a82b-478bb7b2615a
I1101 00:09:46.717638 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:46.717646 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:46.717653 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:46.717660 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:46.717669 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:46 GMT
I1101 00:09:46.718240 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1187","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
I1101 00:09:46.913939 30593 request.go:629] Waited for 195.310235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:46.913993 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:46.913998 30593 round_trippers.go:469] Request Headers:
I1101 00:09:46.914005 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:46.914018 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:46.916550 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:46.916574 30593 round_trippers.go:577] Response Headers:
I1101 00:09:46.916590 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:46.916598 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:46.916605 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:46.916613 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:46 GMT
I1101 00:09:46.916622 30593 round_trippers.go:580] Audit-Id: 3fdb3127-adb6-4b1b-973b-56d6f01c7510
I1101 00:09:46.916635 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:46.916797 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:47.114664 30593 request.go:629] Waited for 197.399091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:47.114755 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:47.114767 30593 round_trippers.go:469] Request Headers:
I1101 00:09:47.114785 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:47.114799 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:47.117780 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:47.117799 30593 round_trippers.go:577] Response Headers:
I1101 00:09:47.117806 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:47.117812 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:47.117817 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:47.117822 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:47 GMT
I1101 00:09:47.117827 30593 round_trippers.go:580] Audit-Id: 88a0065a-7184-46f2-bd0b-8a0b89e70b44
I1101 00:09:47.117841 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:47.118061 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1187","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
I1101 00:09:47.313739 30593 request.go:629] Waited for 195.316992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:47.313819 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:47.313832 30593 round_trippers.go:469] Request Headers:
I1101 00:09:47.313850 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:47.313863 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:47.317452 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:47.317480 30593 round_trippers.go:577] Response Headers:
I1101 00:09:47.317490 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:47.317498 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:47.317506 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:47.317514 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:47.317522 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:47 GMT
I1101 00:09:47.317530 30593 round_trippers.go:580] Audit-Id: 2e316d17-f6a0-43df-b21e-ef5ee4396440
I1101 00:09:47.317759 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:47.818890 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
I1101 00:09:47.818917 30593 round_trippers.go:469] Request Headers:
I1101 00:09:47.818925 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:47.818932 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:47.821524 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:47.821546 30593 round_trippers.go:577] Response Headers:
I1101 00:09:47.821558 30593 round_trippers.go:580] Audit-Id: 50ab8a02-fab8-41d2-abe4-e6fa324b51f1
I1101 00:09:47.821566 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:47.821574 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:47.821582 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:47.821590 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:47.821600 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:47 GMT
I1101 00:09:47.822014 30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1244","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
I1101 00:09:47.822399 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
I1101 00:09:47.822414 30593 round_trippers.go:469] Request Headers:
I1101 00:09:47.822432 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:47.822440 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:47.825524 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:47.825549 30593 round_trippers.go:577] Response Headers:
I1101 00:09:47.825559 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:47.825568 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:47 GMT
I1101 00:09:47.825576 30593 round_trippers.go:580] Audit-Id: cff53b13-6010-47a4-94a7-bfaa8a544728
I1101 00:09:47.825584 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:47.825592 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:47.825600 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:47.825781 30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
I1101 00:09:47.826104 30593 pod_ready.go:92] pod "kube-scheduler-multinode-391061" in "kube-system" namespace has status "Ready":"True"
I1101 00:09:47.826120 30593 pod_ready.go:81] duration metric: took 1.308189456s waiting for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
I1101 00:09:47.826129 30593 pod_ready.go:38] duration metric: took 9.10408386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 00:09:47.826150 30593 api_server.go:52] waiting for apiserver process to appear ...
I1101 00:09:47.826195 30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 00:09:47.838151 30593 command_runner.go:130] > 1704
I1101 00:09:47.838274 30593 api_server.go:72] duration metric: took 11.499995093s to wait for apiserver process to appear ...
I1101 00:09:47.838293 30593 api_server.go:88] waiting for apiserver healthz status ...
I1101 00:09:47.838314 30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
I1101 00:09:47.844117 30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
ok
I1101 00:09:47.844194 30593 round_trippers.go:463] GET https://192.168.39.43:8443/version
I1101 00:09:47.844207 30593 round_trippers.go:469] Request Headers:
I1101 00:09:47.844218 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:47.844226 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:47.845412 30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1101 00:09:47.845425 30593 round_trippers.go:577] Response Headers:
I1101 00:09:47.845431 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:47.845436 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:47.845442 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:47.845450 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:47.845463 30593 round_trippers.go:580] Content-Length: 264
I1101 00:09:47.845475 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:47 GMT
I1101 00:09:47.845485 30593 round_trippers.go:580] Audit-Id: 1468702f-2934-4914-b020-c0a4990038b1
I1101 00:09:47.845504 30593 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.3",
"gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
"gitTreeState": "clean",
"buildDate": "2023-10-18T11:33:18Z",
"goVersion": "go1.20.10",
"compiler": "gc",
"platform": "linux/amd64"
}
I1101 00:09:47.845540 30593 api_server.go:141] control plane version: v1.28.3
I1101 00:09:47.845552 30593 api_server.go:131] duration metric: took 7.252944ms to wait for apiserver health ...
I1101 00:09:47.845562 30593 system_pods.go:43] waiting for kube-system pods to appear ...
I1101 00:09:47.913821 30593 request.go:629] Waited for 68.174041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:47.913881 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:47.913885 30593 round_trippers.go:469] Request Headers:
I1101 00:09:47.913893 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:47.913899 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:47.918202 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:47.918230 30593 round_trippers.go:577] Response Headers:
I1101 00:09:47.918239 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:47.918248 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:47.918254 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:47 GMT
I1101 00:09:47.918259 30593 round_trippers.go:580] Audit-Id: b30ccebe-8256-4a7d-a462-7b4e1d0cdfa8
I1101 00:09:47.918264 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:47.918269 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:47.920031 30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1232","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83346 chars]
I1101 00:09:47.922413 30593 system_pods.go:59] 12 kube-system pods found
I1101 00:09:47.922434 30593 system_pods.go:61] "coredns-5dd5756b68-dg5w7" [eb94555e-1465-4dec-9d6d-ebcbec02841e] Running
I1101 00:09:47.922438 30593 system_pods.go:61] "etcd-multinode-391061" [0537cc4c-2127-4424-b02f-9e4747bc8713] Running
I1101 00:09:47.922442 30593 system_pods.go:61] "kindnet-4jfj9" [2559e20b-85cf-43d5-8663-7ec855d71df9] Running
I1101 00:09:47.922446 30593 system_pods.go:61] "kindnet-lcljq" [171d5f22-d781-4224-88f7-f940ad9e747b] Running
I1101 00:09:47.922450 30593 system_pods.go:61] "kindnet-wrdhd" [85db010e-82bd-4efa-a760-0669bf1e52de] Running
I1101 00:09:47.922454 30593 system_pods.go:61] "kube-apiserver-multinode-391061" [dff82899-3db2-46a2-aea0-ec57d58be1c8] Running
I1101 00:09:47.922458 30593 system_pods.go:61] "kube-controller-manager-multinode-391061" [4775e566-6acd-43ac-b7cd-8dbd245c33cf] Running
I1101 00:09:47.922462 30593 system_pods.go:61] "kube-proxy-clsrp" [a747b091-d679-4ae6-a995-c980235c9a61] Running
I1101 00:09:47.922465 30593 system_pods.go:61] "kube-proxy-rcnv9" [9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9] Running
I1101 00:09:47.922476 30593 system_pods.go:61] "kube-proxy-vdjh2" [9838a111-09e4-4975-b925-1ae5dcfa7334] Running
I1101 00:09:47.922481 30593 system_pods.go:61] "kube-scheduler-multinode-391061" [eaf767ff-8f68-4b91-bcd7-b550481a6155] Running
I1101 00:09:47.922485 30593 system_pods.go:61] "storage-provisioner" [b0b970e9-7d0b-4e94-8ca8-2f3348eaf579] Running
I1101 00:09:47.922492 30593 system_pods.go:74] duration metric: took 76.924582ms to wait for pod list to return data ...
I1101 00:09:47.922513 30593 default_sa.go:34] waiting for default service account to be created ...
I1101 00:09:48.113860 30593 request.go:629] Waited for 191.269729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
I1101 00:09:48.113931 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
I1101 00:09:48.113936 30593 round_trippers.go:469] Request Headers:
I1101 00:09:48.113943 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:48.113949 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:48.117152 30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1101 00:09:48.117173 30593 round_trippers.go:577] Response Headers:
I1101 00:09:48.117179 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:48.117184 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:48.117189 30593 round_trippers.go:580] Content-Length: 262
I1101 00:09:48.117194 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:48 GMT
I1101 00:09:48.117199 30593 round_trippers.go:580] Audit-Id: cf19f0f1-599a-4c01-a817-75c7ba89021a
I1101 00:09:48.117204 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:48.117209 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:48.117226 30593 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"331ecfcc-8852-4250-85c2-da77e5b314fe","resourceVersion":"364","creationTimestamp":"2023-11-01T00:02:33Z"}}]}
I1101 00:09:48.117391 30593 default_sa.go:45] found service account: "default"
I1101 00:09:48.117408 30593 default_sa.go:55] duration metric: took 194.889894ms for default service account to be created ...
I1101 00:09:48.117415 30593 system_pods.go:116] waiting for k8s-apps to be running ...
I1101 00:09:48.313818 30593 request.go:629] Waited for 196.325558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:48.313881 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
I1101 00:09:48.313886 30593 round_trippers.go:469] Request Headers:
I1101 00:09:48.313893 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:48.313899 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:48.317985 30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1101 00:09:48.318004 30593 round_trippers.go:577] Response Headers:
I1101 00:09:48.318011 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:48.318018 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:48 GMT
I1101 00:09:48.318027 30593 round_trippers.go:580] Audit-Id: 7b682312-a373-4aac-a928-19f0e9f08ce4
I1101 00:09:48.318035 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:48.318042 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:48.318051 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:48.319258 30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1232","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83346 chars]
I1101 00:09:48.321698 30593 system_pods.go:86] 12 kube-system pods found
I1101 00:09:48.321724 30593 system_pods.go:89] "coredns-5dd5756b68-dg5w7" [eb94555e-1465-4dec-9d6d-ebcbec02841e] Running
I1101 00:09:48.321729 30593 system_pods.go:89] "etcd-multinode-391061" [0537cc4c-2127-4424-b02f-9e4747bc8713] Running
I1101 00:09:48.321733 30593 system_pods.go:89] "kindnet-4jfj9" [2559e20b-85cf-43d5-8663-7ec855d71df9] Running
I1101 00:09:48.321739 30593 system_pods.go:89] "kindnet-lcljq" [171d5f22-d781-4224-88f7-f940ad9e747b] Running
I1101 00:09:48.321743 30593 system_pods.go:89] "kindnet-wrdhd" [85db010e-82bd-4efa-a760-0669bf1e52de] Running
I1101 00:09:48.321747 30593 system_pods.go:89] "kube-apiserver-multinode-391061" [dff82899-3db2-46a2-aea0-ec57d58be1c8] Running
I1101 00:09:48.321752 30593 system_pods.go:89] "kube-controller-manager-multinode-391061" [4775e566-6acd-43ac-b7cd-8dbd245c33cf] Running
I1101 00:09:48.321756 30593 system_pods.go:89] "kube-proxy-clsrp" [a747b091-d679-4ae6-a995-c980235c9a61] Running
I1101 00:09:48.321762 30593 system_pods.go:89] "kube-proxy-rcnv9" [9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9] Running
I1101 00:09:48.321765 30593 system_pods.go:89] "kube-proxy-vdjh2" [9838a111-09e4-4975-b925-1ae5dcfa7334] Running
I1101 00:09:48.321772 30593 system_pods.go:89] "kube-scheduler-multinode-391061" [eaf767ff-8f68-4b91-bcd7-b550481a6155] Running
I1101 00:09:48.321777 30593 system_pods.go:89] "storage-provisioner" [b0b970e9-7d0b-4e94-8ca8-2f3348eaf579] Running
I1101 00:09:48.321785 30593 system_pods.go:126] duration metric: took 204.365858ms to wait for k8s-apps to be running ...
I1101 00:09:48.321794 30593 system_svc.go:44] waiting for kubelet service to be running ....
I1101 00:09:48.321835 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 00:09:48.334581 30593 system_svc.go:56] duration metric: took 12.775415ms WaitForService to wait for kubelet.
I1101 00:09:48.334608 30593 kubeadm.go:581] duration metric: took 11.996332779s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I1101 00:09:48.334634 30593 node_conditions.go:102] verifying NodePressure condition ...
I1101 00:09:48.514065 30593 request.go:629] Waited for 179.367734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes
I1101 00:09:48.514131 30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes
I1101 00:09:48.514136 30593 round_trippers.go:469] Request Headers:
I1101 00:09:48.514144 30593 round_trippers.go:473] Accept: application/json, */*
I1101 00:09:48.514150 30593 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1101 00:09:48.517017 30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1101 00:09:48.517036 30593 round_trippers.go:577] Response Headers:
I1101 00:09:48.517043 30593 round_trippers.go:580] Audit-Id: acbda546-1395-4e94-a808-39a73ef2e8e6
I1101 00:09:48.517057 30593 round_trippers.go:580] Cache-Control: no-cache, private
I1101 00:09:48.517063 30593 round_trippers.go:580] Content-Type: application/json
I1101 00:09:48.517070 30593 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
I1101 00:09:48.517077 30593 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
I1101 00:09:48.517087 30593 round_trippers.go:580] Date: Wed, 01 Nov 2023 00:09:48 GMT
I1101 00:09:48.517358 30593 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma
nagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 9463 chars]
I1101 00:09:48.517853 30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1101 00:09:48.517873 30593 node_conditions.go:123] node cpu capacity is 2
I1101 00:09:48.517883 30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1101 00:09:48.517888 30593 node_conditions.go:123] node cpu capacity is 2
I1101 00:09:48.517892 30593 node_conditions.go:105] duration metric: took 183.255117ms to run NodePressure ...
I1101 00:09:48.517902 30593 start.go:228] waiting for startup goroutines ...
I1101 00:09:48.517913 30593 start.go:233] waiting for cluster config update ...
I1101 00:09:48.517918 30593 start.go:242] writing updated cluster config ...
I1101 00:09:48.518328 30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1101 00:09:48.518400 30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
I1101 00:09:48.521532 30593 out.go:177] * Starting worker node multinode-391061-m02 in cluster multinode-391061
I1101 00:09:48.522898 30593 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I1101 00:09:48.522933 30593 cache.go:56] Caching tarball of preloaded images
I1101 00:09:48.523028 30593 preload.go:174] Found /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1101 00:09:48.523039 30593 cache.go:59] Finished verifying existence of preloaded tar for v1.28.3 on docker
I1101 00:09:48.523130 30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
I1101 00:09:48.523306 30593 start.go:365] acquiring machines lock for multinode-391061-m02: {Name:mkd250049361a5d831a3d31c273569334737e54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1101 00:09:48.523347 30593 start.go:369] acquired machines lock for "multinode-391061-m02" in 23.277µs
I1101 00:09:48.523360 30593 start.go:96] Skipping create...Using existing machine configuration
I1101 00:09:48.523365 30593 fix.go:54] fixHost starting: m02
I1101 00:09:48.523626 30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1101 00:09:48.523657 30593 main.go:141] libmachine: Launching plugin server for driver kvm2
I1101 00:09:48.538023 30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
I1101 00:09:48.538553 30593 main.go:141] libmachine: () Calling .GetVersion
I1101 00:09:48.539008 30593 main.go:141] libmachine: Using API Version 1
I1101 00:09:48.539038 30593 main.go:141] libmachine: () Calling .SetConfigRaw
I1101 00:09:48.539380 30593 main.go:141] libmachine: () Calling .GetMachineName
I1101 00:09:48.539558 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:09:48.539763 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetState
I1101 00:09:48.541362 30593 fix.go:102] recreateIfNeeded on multinode-391061-m02: state=Stopped err=<nil>
I1101 00:09:48.541381 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
W1101 00:09:48.541559 30593 fix.go:128] unexpected machine state, will restart: <nil>
I1101 00:09:48.543776 30593 out.go:177] * Restarting existing kvm2 VM for "multinode-391061-m02" ...
I1101 00:09:48.545357 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .Start
I1101 00:09:48.545519 30593 main.go:141] libmachine: (multinode-391061-m02) Ensuring networks are active...
I1101 00:09:48.546142 30593 main.go:141] libmachine: (multinode-391061-m02) Ensuring network default is active
I1101 00:09:48.546521 30593 main.go:141] libmachine: (multinode-391061-m02) Ensuring network mk-multinode-391061 is active
I1101 00:09:48.546910 30593 main.go:141] libmachine: (multinode-391061-m02) Getting domain xml...
I1101 00:09:48.547503 30593 main.go:141] libmachine: (multinode-391061-m02) Creating domain...
I1101 00:09:49.771823 30593 main.go:141] libmachine: (multinode-391061-m02) Waiting to get IP...
I1101 00:09:49.772640 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:49.773071 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:49.773175 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:49.773074 30847 retry.go:31] will retry after 274.263244ms: waiting for machine to come up
I1101 00:09:50.048692 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:50.049124 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:50.049162 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:50.049076 30847 retry.go:31] will retry after 372.692246ms: waiting for machine to come up
I1101 00:09:50.423723 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:50.424163 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:50.424198 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:50.424109 30847 retry.go:31] will retry after 328.806363ms: waiting for machine to come up
I1101 00:09:50.754813 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:50.755280 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:50.755299 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:50.755254 30847 retry.go:31] will retry after 486.547371ms: waiting for machine to come up
I1101 00:09:51.243022 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:51.243428 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:51.243451 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:51.243379 30847 retry.go:31] will retry after 524.248371ms: waiting for machine to come up
I1101 00:09:51.769198 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:51.769648 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:51.769689 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:51.769606 30847 retry.go:31] will retry after 931.47967ms: waiting for machine to come up
I1101 00:09:52.703177 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:52.703627 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:52.703656 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:52.703550 30847 retry.go:31] will retry after 962.96473ms: waiting for machine to come up
I1101 00:09:53.668096 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:53.668562 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:53.668584 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:53.668516 30847 retry.go:31] will retry after 926.464487ms: waiting for machine to come up
I1101 00:09:54.596589 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:54.596929 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:54.596953 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:54.596883 30847 retry.go:31] will retry after 1.199020855s: waiting for machine to come up
I1101 00:09:55.797189 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:55.797717 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:55.797748 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:55.797665 30847 retry.go:31] will retry after 1.98043569s: waiting for machine to come up
I1101 00:09:57.780876 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:09:57.781471 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:09:57.781502 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:57.781409 30847 retry.go:31] will retry after 2.601288069s: waiting for machine to come up
I1101 00:10:00.385745 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:00.386332 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:10:00.386369 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:10:00.386242 30847 retry.go:31] will retry after 2.239008923s: waiting for machine to come up
I1101 00:10:02.627577 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:02.627955 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
I1101 00:10:02.627983 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:10:02.627920 30847 retry.go:31] will retry after 3.415765053s: waiting for machine to come up
I1101 00:10:06.046739 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.047249 30593 main.go:141] libmachine: (multinode-391061-m02) Found IP for machine: 192.168.39.249
I1101 00:10:06.047290 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has current primary IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.047305 30593 main.go:141] libmachine: (multinode-391061-m02) Reserving static IP address...
I1101 00:10:06.047763 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "multinode-391061-m02", mac: "52:54:00:f1:1a:84", ip: "192.168.39.249"} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.047790 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | skip adding static IP to network mk-multinode-391061 - found existing host DHCP lease matching {name: "multinode-391061-m02", mac: "52:54:00:f1:1a:84", ip: "192.168.39.249"}
I1101 00:10:06.047800 30593 main.go:141] libmachine: (multinode-391061-m02) Reserved static IP address: 192.168.39.249
I1101 00:10:06.047814 30593 main.go:141] libmachine: (multinode-391061-m02) Waiting for SSH to be available...
I1101 00:10:06.047824 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | Getting to WaitForSSH function...
I1101 00:10:06.049673 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.050046 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.050081 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.050222 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | Using SSH client type: external
I1101 00:10:06.050261 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa (-rw-------)
I1101 00:10:06.050300 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I1101 00:10:06.050322 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | About to run SSH command:
I1101 00:10:06.050339 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | exit 0
I1101 00:10:06.146337 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | SSH cmd err, output: <nil>:
I1101 00:10:06.146696 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetConfigRaw
I1101 00:10:06.147450 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
I1101 00:10:06.149870 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.150236 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.150267 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.150541 30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
I1101 00:10:06.150763 30593 machine.go:88] provisioning docker machine ...
I1101 00:10:06.150786 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:06.150984 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetMachineName
I1101 00:10:06.151140 30593 buildroot.go:166] provisioning hostname "multinode-391061-m02"
I1101 00:10:06.151161 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetMachineName
I1101 00:10:06.151315 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:06.153372 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.153742 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.153790 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.153926 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:06.154158 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.154347 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.154535 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:06.154739 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:10:06.155162 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1101 00:10:06.155179 30593 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-391061-m02 && echo "multinode-391061-m02" | sudo tee /etc/hostname
I1101 00:10:06.302682 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-391061-m02
I1101 00:10:06.302715 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:06.305443 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.305857 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.305883 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.306094 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:06.306306 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.306521 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.306659 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:06.306805 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:10:06.307269 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1101 00:10:06.307298 30593 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-391061-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-391061-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-391061-m02' | sudo tee -a /etc/hosts;
fi
fi
I1101 00:10:06.448087 30593 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1101 00:10:06.448122 30593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7251/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7251/.minikube}
I1101 00:10:06.448143 30593 buildroot.go:174] setting up certificates
I1101 00:10:06.448153 30593 provision.go:83] configureAuth start
I1101 00:10:06.448163 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetMachineName
I1101 00:10:06.448466 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
I1101 00:10:06.451196 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.451596 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.451627 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.451812 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:06.453965 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.454286 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.454315 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.454535 30593 provision.go:138] copyHostCerts
I1101 00:10:06.454570 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
I1101 00:10:06.454601 30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem, removing ...
I1101 00:10:06.454610 30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
I1101 00:10:06.454674 30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem (1082 bytes)
I1101 00:10:06.454748 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
I1101 00:10:06.454767 30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem, removing ...
I1101 00:10:06.454773 30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
I1101 00:10:06.454796 30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem (1123 bytes)
I1101 00:10:06.454836 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
I1101 00:10:06.454852 30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem, removing ...
I1101 00:10:06.454858 30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
I1101 00:10:06.454876 30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem (1675 bytes)
I1101 00:10:06.454920 30593 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem org=jenkins.multinode-391061-m02 san=[192.168.39.249 192.168.39.249 localhost 127.0.0.1 minikube multinode-391061-m02]
I1101 00:10:06.568585 30593 provision.go:172] copyRemoteCerts
I1101 00:10:06.568638 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 00:10:06.568659 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:06.571150 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.571450 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.571479 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.571687 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:06.571874 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.572047 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:06.572186 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
I1101 00:10:06.667838 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1101 00:10:06.667924 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1101 00:10:06.689930 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem -> /etc/docker/server.pem
I1101 00:10:06.689995 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I1101 00:10:06.712213 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1101 00:10:06.712292 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1101 00:10:06.733879 30593 provision.go:86] duration metric: configureAuth took 285.714663ms
I1101 00:10:06.733904 30593 buildroot.go:189] setting minikube options for container-runtime
I1101 00:10:06.734094 30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1101 00:10:06.734113 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:06.734377 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:06.736917 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.737314 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.737348 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.737503 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:06.737692 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.737870 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.738014 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:06.738189 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:10:06.738528 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1101 00:10:06.738541 30593 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1101 00:10:06.871826 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I1101 00:10:06.871854 30593 buildroot.go:70] root file system type: tmpfs
I1101 00:10:06.872006 30593 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1101 00:10:06.872036 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:06.874568 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.874916 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:06.874940 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:06.875118 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:06.875315 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.875468 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:06.875569 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:06.875698 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:10:06.876002 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1101 00:10:06.876075 30593 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.43"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1101 00:10:07.020165 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.43
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1101 00:10:07.020194 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:07.022769 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:07.023132 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:07.023159 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:07.023341 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:07.023522 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:07.023707 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:07.023843 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:07.023996 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:10:07.024324 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1101 00:10:07.024341 30593 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1101 00:10:07.865650 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I1101 00:10:07.865678 30593 machine.go:91] provisioned docker machine in 1.714900545s
I1101 00:10:07.865693 30593 start.go:300] post-start starting for "multinode-391061-m02" (driver="kvm2")
I1101 00:10:07.865707 30593 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 00:10:07.865730 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:07.866051 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 00:10:07.866082 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:07.868728 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:07.869111 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:07.869135 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:07.869295 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:07.869516 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:07.869672 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:07.869814 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
I1101 00:10:07.964822 30593 ssh_runner.go:195] Run: cat /etc/os-release
I1101 00:10:07.968645 30593 command_runner.go:130] > NAME=Buildroot
I1101 00:10:07.968665 30593 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
I1101 00:10:07.968672 30593 command_runner.go:130] > ID=buildroot
I1101 00:10:07.968681 30593 command_runner.go:130] > VERSION_ID=2021.02.12
I1101 00:10:07.968687 30593 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I1101 00:10:07.968778 30593 info.go:137] Remote host: Buildroot 2021.02.12
I1101 00:10:07.968802 30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/addons for local assets ...
I1101 00:10:07.968861 30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/files for local assets ...
I1101 00:10:07.968928 30593 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> 144632.pem in /etc/ssl/certs
I1101 00:10:07.968937 30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> /etc/ssl/certs/144632.pem
I1101 00:10:07.969013 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1101 00:10:07.978134 30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /etc/ssl/certs/144632.pem (1708 bytes)
I1101 00:10:07.999912 30593 start.go:303] post-start completed in 134.20357ms
I1101 00:10:07.999936 30593 fix.go:56] fixHost completed within 19.476570148s
I1101 00:10:07.999956 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:08.002715 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.003077 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:08.003109 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.003255 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:08.003478 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:08.003658 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:08.003796 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:08.003977 30593 main.go:141] libmachine: Using SSH client type: native
I1101 00:10:08.004287 30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1101 00:10:08.004297 30593 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I1101 00:10:08.139625 30593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698797408.091239350
I1101 00:10:08.139661 30593 fix.go:206] guest clock: 1698797408.091239350
I1101 00:10:08.139672 30593 fix.go:219] Guest: 2023-11-01 00:10:08.09123935 +0000 UTC Remote: 2023-11-01 00:10:07.999939094 +0000 UTC m=+78.350442936 (delta=91.300256ms)
I1101 00:10:08.139692 30593 fix.go:190] guest clock delta is within tolerance: 91.300256ms
I1101 00:10:08.139699 30593 start.go:83] releasing machines lock for "multinode-391061-m02", held for 19.616342127s
I1101 00:10:08.139723 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:08.140075 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
I1101 00:10:08.142846 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.143203 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:08.143246 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.145734 30593 out.go:177] * Found network options:
I1101 00:10:08.147426 30593 out.go:177] - NO_PROXY=192.168.39.43
W1101 00:10:08.148945 30593 proxy.go:119] fail to check proxy env: Error ip not in block
I1101 00:10:08.148990 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:08.149744 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:08.149992 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
I1101 00:10:08.150087 30593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1101 00:10:08.150122 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
W1101 00:10:08.150204 30593 proxy.go:119] fail to check proxy env: Error ip not in block
I1101 00:10:08.150272 30593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1101 00:10:08.150293 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
I1101 00:10:08.153130 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.153377 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.153609 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:08.153633 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.153818 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
I1101 00:10:08.153840 30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
I1101 00:10:08.153853 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:08.154005 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
I1101 00:10:08.154068 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:08.154141 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
I1101 00:10:08.154205 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:08.154260 30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
I1101 00:10:08.154322 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
I1101 00:10:08.154355 30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
I1101 00:10:08.266696 30593 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I1101 00:10:08.266764 30593 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W1101 00:10:08.266798 30593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1101 00:10:08.266854 30593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1101 00:10:08.282630 30593 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I1101 00:10:08.282695 30593 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1101 00:10:08.282708 30593 start.go:472] detecting cgroup driver to use...
I1101 00:10:08.282848 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1101 00:10:08.299593 30593 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I1101 00:10:08.299879 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I1101 00:10:08.309962 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1101 00:10:08.319802 30593 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1101 00:10:08.319855 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1101 00:10:08.329984 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1101 00:10:08.340324 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1101 00:10:08.350388 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1101 00:10:08.360362 30593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1101 00:10:08.370630 30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1101 00:10:08.380841 30593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1101 00:10:08.389848 30593 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I1101 00:10:08.389933 30593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1101 00:10:08.398827 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:10:08.509909 30593 ssh_runner.go:195] Run: sudo systemctl restart containerd
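The run of `sed` rewrites above forces containerd onto the cgroupfs driver by flipping `SystemdCgroup` in /etc/containerd/config.toml before the daemon restart. The same rewrite can be exercised safely against a scratch copy of the file; this is a minimal sketch, with the TOML contents assumed for illustration rather than taken from the VM:

```shell
# Sketch: apply the SystemdCgroup rewrite from the log above to a
# throwaway copy of config.toml (contents assumed, not from the VM).
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same pattern as the logged command: preserve indentation, force cgroupfs.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"  # -> "  SystemdCgroup = false"
```

The capture group keeps whatever leading whitespace the key had, which is why the logged command is safe to run regardless of how deeply the key is nested in the TOML tree.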
I1101 00:10:08.527202 30593 start.go:472] detecting cgroup driver to use...
I1101 00:10:08.527267 30593 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1101 00:10:08.539911 30593 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I1101 00:10:08.540831 30593 command_runner.go:130] > [Unit]
I1101 00:10:08.540847 30593 command_runner.go:130] > Description=Docker Application Container Engine
I1101 00:10:08.540853 30593 command_runner.go:130] > Documentation=https://docs.docker.com
I1101 00:10:08.540859 30593 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I1101 00:10:08.540864 30593 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I1101 00:10:08.540873 30593 command_runner.go:130] > StartLimitBurst=3
I1101 00:10:08.540880 30593 command_runner.go:130] > StartLimitIntervalSec=60
I1101 00:10:08.540884 30593 command_runner.go:130] > [Service]
I1101 00:10:08.540890 30593 command_runner.go:130] > Type=notify
I1101 00:10:08.540899 30593 command_runner.go:130] > Restart=on-failure
I1101 00:10:08.540906 30593 command_runner.go:130] > Environment=NO_PROXY=192.168.39.43
I1101 00:10:08.540915 30593 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I1101 00:10:08.540932 30593 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I1101 00:10:08.540943 30593 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I1101 00:10:08.540952 30593 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I1101 00:10:08.540961 30593 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I1101 00:10:08.540970 30593 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I1101 00:10:08.540980 30593 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I1101 00:10:08.540993 30593 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I1101 00:10:08.541002 30593 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I1101 00:10:08.541009 30593 command_runner.go:130] > ExecStart=
I1101 00:10:08.541024 30593 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I1101 00:10:08.541035 30593 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I1101 00:10:08.541042 30593 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I1101 00:10:08.541051 30593 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I1101 00:10:08.541057 30593 command_runner.go:130] > LimitNOFILE=infinity
I1101 00:10:08.541062 30593 command_runner.go:130] > LimitNPROC=infinity
I1101 00:10:08.541066 30593 command_runner.go:130] > LimitCORE=infinity
I1101 00:10:08.541073 30593 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I1101 00:10:08.541080 30593 command_runner.go:130] > # Only systemd 226 and above support this version.
I1101 00:10:08.541087 30593 command_runner.go:130] > TasksMax=infinity
I1101 00:10:08.541091 30593 command_runner.go:130] > TimeoutStartSec=0
I1101 00:10:08.541100 30593 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I1101 00:10:08.541106 30593 command_runner.go:130] > Delegate=yes
I1101 00:10:08.541112 30593 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I1101 00:10:08.541122 30593 command_runner.go:130] > KillMode=process
I1101 00:10:08.541128 30593 command_runner.go:130] > [Install]
I1101 00:10:08.541133 30593 command_runner.go:130] > WantedBy=multi-user.target
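The comments inside the unit above describe the standard systemd pattern for overriding an inherited `ExecStart=`: a drop-in first clears the base value with an empty assignment, then sets its own, since two non-empty `ExecStart=` lines are only legal for `Type=oneshot` services. Stripped of the minikube specifics, the pattern looks like this (path and flags hypothetical):

```ini
# /etc/systemd/system/docker.service.d/override.conf (hypothetical path)
[Service]
# Clear the ExecStart inherited from the base unit...
ExecStart=
# ...then supply the replacement command.
ExecStart=/usr/bin/dockerd --some-overridden-flags
```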
I1101 00:10:08.541558 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 00:10:08.556173 30593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1101 00:10:08.575016 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 00:10:08.587990 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 00:10:08.601691 30593 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1101 00:10:08.631342 30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 00:10:08.644194 30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
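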
" | sudo tee /etc/crictl.yaml"
I1101 00:10:08.661548 30593 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I1101 00:10:08.662099 30593 ssh_runner.go:195] Run: which cri-dockerd
I1101 00:10:08.665592 30593 command_runner.go:130] > /usr/bin/cri-dockerd
I1101 00:10:08.665782 30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1101 00:10:08.674228 30593 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1101 00:10:08.690202 30593 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1101 00:10:08.793665 30593 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1101 00:10:08.913029 30593 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
I1101 00:10:08.913074 30593 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
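The 130-byte daemon.json pushed over SSH above configures Docker for the cgroupfs driver, matching the containerd and cri-dockerd settings applied earlier. The exact contents are not shown in the log; a drop-in of that shape might plausibly look like the following (all fields beyond the cgroup driver are assumptions):

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
```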
I1101 00:10:08.928591 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:10:09.029624 30593 ssh_runner.go:195] Run: sudo systemctl restart docker
I1101 00:10:10.439233 30593 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.409560046s)
I1101 00:10:10.439309 30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1101 00:10:10.540266 30593 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1101 00:10:10.657292 30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1101 00:10:10.768655 30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 00:10:10.871570 30593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1101 00:10:10.887421 30593 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
I1101 00:10:10.889772 30593 out.go:177]
W1101 00:10:10.891480 30593 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W1101 00:10:10.891500 30593 out.go:239] *
W1101 00:10:10.892409 30593 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
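The `RUNTIME_ENABLE` exit above is minikube surfacing the non-zero status of `sudo systemctl restart cri-docker.socket`, run over SSH by `ssh_runner`. A minimal sketch of that propagation, with `false` standing in for the failing systemctl call (the wrapper name is hypothetical):

```shell
# Hypothetical stand-in: `false` plays the role of the failing
# `systemctl restart cri-docker.socket`; the wrapper reports the exit
# status the way the ssh_runner error message in the log does.
run_remote() {
  "$@"
  status=$?
  if [ "$status" -ne 0 ]; then
    echo "Process exited with status $status"
  fi
  return "$status"
}
run_remote false || true  # -> prints "Process exited with status 1"
```

On the real VM, the advice in the error text applies: `journalctl -xe` (or `journalctl -u cri-docker.socket`) on the guest is where the underlying socket-unit failure would be recorded.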
I1101 00:10:10.894220 30593 out.go:177]
*
* ==> Docker <==
* -- Journal begins at Wed 2023-11-01 00:09:00 UTC, ends at Wed 2023-11-01 00:10:11 UTC. --
Nov 01 00:09:34 multinode-391061 dockerd[846]: time="2023-11-01T00:09:34.354911439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 00:09:34 multinode-391061 dockerd[846]: time="2023-11-01T00:09:34.354921195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 00:09:36 multinode-391061 cri-dockerd[1070]: time="2023-11-01T00:09:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fde88b0de04da9bbd831a6d4c66ca23079816d358c2a073c1c844f3c823b3a46/resolv.conf as [nameserver 192.168.122.1]"
Nov 01 00:09:36 multinode-391061 dockerd[846]: time="2023-11-01T00:09:36.610425616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 01 00:09:36 multinode-391061 dockerd[846]: time="2023-11-01T00:09:36.610678286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 00:09:36 multinode-391061 dockerd[846]: time="2023-11-01T00:09:36.610882319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 00:09:36 multinode-391061 dockerd[846]: time="2023-11-01T00:09:36.611022314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.374690136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.375186054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.375212070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.375225621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.385513235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.385648741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.385726478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.385835459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 00:09:40 multinode-391061 cri-dockerd[1070]: time="2023-11-01T00:09:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1607f59d6ba061ddfaed58cd098e43eb0a9636f0a88d126db9b8190b719c5a2c/resolv.conf as [nameserver 192.168.122.1]"
Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.935555299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.938870881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.939075579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.939369832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 00:09:41 multinode-391061 cri-dockerd[1070]: time="2023-11-01T00:09:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dd4ac2bcf1f1a7e97a662352c7ff24fed55ebabd9072e6380c598ee47a8bd587/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Nov 01 00:09:41 multinode-391061 dockerd[846]: time="2023-11-01T00:09:41.205597128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 01 00:09:41 multinode-391061 dockerd[846]: time="2023-11-01T00:09:41.205714151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 00:09:41 multinode-391061 dockerd[846]: time="2023-11-01T00:09:41.205824051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 00:09:41 multinode-391061 dockerd[846]: time="2023-11-01T00:09:41.205840774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
7977c47b23fe0 8c811b4aec35f 30 seconds ago Running busybox 2 dd4ac2bcf1f1a busybox-5bc68d56bd-gm6t7
c9a40438d8228 ead0a4a53df89 31 seconds ago Running coredns 2 1607f59d6ba06 coredns-5dd5756b68-dg5w7
5c271018fdbe1 c7d1297425461 35 seconds ago Running kindnet-cni 2 fde88b0de04da kindnet-4jfj9
bf95dea74238d 6e38f40d628db 37 seconds ago Running storage-provisioner 3 40ae286f2e451 storage-provisioner
a5893c8acc578 bfc896cf80fba 38 seconds ago Running kube-proxy 2 d00d0faf2517f kube-proxy-clsrp
57698df880604 6d1b4fd1b182d 43 seconds ago Running kube-scheduler 2 08911deed6912 kube-scheduler-multinode-391061
16f5037339398 73deb9a3f7025 44 seconds ago Running etcd 2 df5b53c7fbd9f etcd-multinode-391061
c2c9b3f6a6e3c 10baa1ca17068 44 seconds ago Running kube-controller-manager 2 686def3a5433e kube-controller-manager-multinode-391061
ad9ce8cffbbd9 5374347291230 44 seconds ago Running kube-apiserver 2 058229c68e582 kube-apiserver-multinode-391061
c8ec107c7b838 6e38f40d628db 3 minutes ago Exited storage-provisioner 2 6e72da581d8b3 storage-provisioner
8c3065faff023 8c811b4aec35f 3 minutes ago Exited busybox 1 02ff0963ebcb2 busybox-5bc68d56bd-gm6t7
8a050fec9e562 ead0a4a53df89 3 minutes ago Exited coredns 1 0922f8b627ba5 coredns-5dd5756b68-dg5w7
7e5dd13abba8f c7d1297425461 3 minutes ago Exited kindnet-cni 1 d52c65ebca758 kindnet-4jfj9
beeaf0ac020b3 bfc896cf80fba 3 minutes ago Exited kube-proxy 1 5c355a51915ed kube-proxy-clsrp
37d9dd0022b92 73deb9a3f7025 3 minutes ago Exited etcd 1 92b70c8321ee1 etcd-multinode-391061
c5ea3d84d06ff 6d1b4fd1b182d 3 minutes ago Exited kube-scheduler 1 9f5176fde232a kube-scheduler-multinode-391061
32294fac02b31 10baa1ca17068 3 minutes ago Exited kube-controller-manager 1 f576715f1f474 kube-controller-manager-multinode-391061
a49a86a47d7cc 5374347291230 3 minutes ago Exited kube-apiserver 1 36d5f0bd5cf2b kube-apiserver-multinode-391061
*
* ==> coredns [8a050fec9e56] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:60917 - 58012 "HINFO IN 5379909798549472737.3172976332792896323. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021213453s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> coredns [c9a40438d822] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:53059 - 12343 "HINFO IN 8994418390587536084.7952953180045631116. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.045076997s
*
* ==> describe nodes <==
* Name: multinode-391061
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-391061
kubernetes.io/os=linux
minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
minikube.k8s.io/name=multinode-391061
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_11_01T00_02_22_0700
minikube.k8s.io/version=v1.32.0-beta.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 01 Nov 2023 00:02:17 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-391061
AcquireTime: <unset>
RenewTime: Wed, 01 Nov 2023 00:10:02 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 01 Nov 2023 00:09:38 +0000 Wed, 01 Nov 2023 00:02:16 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 01 Nov 2023 00:09:38 +0000 Wed, 01 Nov 2023 00:02:16 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 01 Nov 2023 00:09:38 +0000 Wed, 01 Nov 2023 00:02:16 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 01 Nov 2023 00:09:38 +0000 Wed, 01 Nov 2023 00:09:38 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.43
Hostname: multinode-391061
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 47962989365f465fa8a710ebe1080a98
System UUID: 47962989-365f-465f-a8a7-10ebe1080a98
Boot ID: 343d2a39-eea8-4e0b-8c4a-ac4d1581ade2
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.6
Kubelet Version: v1.28.3
Kube-Proxy Version: v1.28.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-5bc68d56bd-gm6t7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m31s
kube-system coredns-5dd5756b68-dg5w7 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 7m39s
kube-system etcd-multinode-391061 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 7m51s
kube-system kindnet-4jfj9 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 7m39s
kube-system kube-apiserver-multinode-391061 250m (12%) 0 (0%) 0 (0%) 0 (0%) 7m51s
kube-system kube-controller-manager-multinode-391061 200m (10%) 0 (0%) 0 (0%) 0 (0%) 7m51s
kube-system kube-proxy-clsrp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m39s
kube-system kube-scheduler-multinode-391061 100m (5%) 0 (0%) 0 (0%) 0 (0%) 7m51s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m38s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (10%) 220Mi (10%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m37s kube-proxy
Normal Starting 37s kube-proxy
Normal Starting 3m43s kube-proxy
Normal NodeAllocatableEnforced 7m59s kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 7m59s (x8 over 7m59s) kubelet Node multinode-391061 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m59s (x7 over 7m59s) kubelet Node multinode-391061 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 7m59s (x8 over 7m59s) kubelet Node multinode-391061 status is now: NodeHasSufficientMemory
Normal Starting 7m51s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 7m51s kubelet Node multinode-391061 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m51s kubelet Node multinode-391061 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m51s kubelet Node multinode-391061 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 7m51s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 7m40s node-controller Node multinode-391061 event: Registered Node multinode-391061 in Controller
Normal NodeReady 7m28s kubelet Node multinode-391061 status is now: NodeReady
Normal Starting 3m52s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m51s (x8 over 3m52s) kubelet Node multinode-391061 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m51s (x8 over 3m52s) kubelet Node multinode-391061 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m51s (x7 over 3m52s) kubelet Node multinode-391061 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m51s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 3m33s node-controller Node multinode-391061 event: Registered Node multinode-391061 in Controller
Normal Starting 47s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 47s (x8 over 47s) kubelet Node multinode-391061 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 47s (x8 over 47s) kubelet Node multinode-391061 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 47s (x7 over 47s) kubelet Node multinode-391061 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 47s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 29s node-controller Node multinode-391061 event: Registered Node multinode-391061 in Controller
Name: multinode-391061-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-391061-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 01 Nov 2023 00:07:14 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-391061-m02
AcquireTime: <unset>
RenewTime: Wed, 01 Nov 2023 00:08:15 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 01 Nov 2023 00:07:25 +0000 Wed, 01 Nov 2023 00:07:14 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 01 Nov 2023 00:07:25 +0000 Wed, 01 Nov 2023 00:07:14 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 01 Nov 2023 00:07:25 +0000 Wed, 01 Nov 2023 00:07:14 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 01 Nov 2023 00:07:25 +0000 Wed, 01 Nov 2023 00:07:25 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.249
Hostname: multinode-391061-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 0d0e385c1d0d48059fa1f8426a07e391
System UUID: 0d0e385c-1d0d-4805-9fa1-f8426a07e391
Boot ID: cadfab0f-d241-492f-aeaa-46e564f9963c
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.6
Kubelet Version: v1.28.3
Kube-Proxy Version: v1.28.3
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-5bc68d56bd-lgqxz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m21s
kube-system kindnet-lcljq 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 6m50s
kube-system kube-proxy-rcnv9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m50s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 6m42s kube-proxy
Normal Starting 2m55s kube-proxy
Normal Starting 6m50s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 6m50s (x2 over 6m50s) kubelet Node multinode-391061-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m50s (x2 over 6m50s) kubelet Node multinode-391061-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m50s (x2 over 6m50s) kubelet Node multinode-391061-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m50s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 6m34s kubelet Node multinode-391061-m02 status is now: NodeReady
Normal Starting 2m58s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m58s (x2 over 2m58s) kubelet Node multinode-391061-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m58s (x2 over 2m58s) kubelet Node multinode-391061-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m58s (x2 over 2m58s) kubelet Node multinode-391061-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m58s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 2m53s node-controller Node multinode-391061-m02 event: Registered Node multinode-391061-m02 in Controller
Normal NodeReady 2m47s kubelet Node multinode-391061-m02 status is now: NodeReady
Normal RegisteredNode 29s node-controller Node multinode-391061-m02 event: Registered Node multinode-391061-m02 in Controller
*
* ==> dmesg <==
* [Nov 1 00:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.064584] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.313265] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[Nov 1 00:09] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.132278] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.339845] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +6.187073] systemd-fstab-generator[513]: Ignoring "noauto" for root device
[ +0.098054] systemd-fstab-generator[524]: Ignoring "noauto" for root device
[ +1.203746] systemd-fstab-generator[768]: Ignoring "noauto" for root device
[ +0.291358] systemd-fstab-generator[807]: Ignoring "noauto" for root device
[ +0.112174] systemd-fstab-generator[818]: Ignoring "noauto" for root device
[ +0.133633] systemd-fstab-generator[831]: Ignoring "noauto" for root device
[ +1.571299] systemd-fstab-generator[1015]: Ignoring "noauto" for root device
[ +0.107000] systemd-fstab-generator[1026]: Ignoring "noauto" for root device
[ +0.104810] systemd-fstab-generator[1037]: Ignoring "noauto" for root device
[ +0.118817] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
[ +0.123026] systemd-fstab-generator[1062]: Ignoring "noauto" for root device
[ +12.006823] systemd-fstab-generator[1313]: Ignoring "noauto" for root device
[ +0.390251] kauditd_printk_skb: 67 callbacks suppressed
*
* ==> etcd [16f503733939] <==
* {"level":"info","ts":"2023-11-01T00:09:28.19015Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-11-01T00:09:28.190278Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-11-01T00:09:28.191119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 switched to configuration voters=(4987603935014751745)"}
{"level":"info","ts":"2023-11-01T00:09:28.191618Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e2f92b1da63e7b06","local-member-id":"4537875a7ae50e01","added-peer-id":"4537875a7ae50e01","added-peer-peer-urls":["https://192.168.39.43:2380"]}
{"level":"info","ts":"2023-11-01T00:09:28.194717Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-11-01T00:09:28.198264Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"4537875a7ae50e01","initial-advertise-peer-urls":["https://192.168.39.43:2380"],"listen-peer-urls":["https://192.168.39.43:2380"],"advertise-client-urls":["https://192.168.39.43:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.43:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-11-01T00:09:28.198324Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-11-01T00:09:28.192852Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e2f92b1da63e7b06","local-member-id":"4537875a7ae50e01","cluster-version":"3.5"}
{"level":"info","ts":"2023-11-01T00:09:28.198398Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-11-01T00:09:28.195222Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.43:2380"}
{"level":"info","ts":"2023-11-01T00:09:28.203901Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.43:2380"}
{"level":"info","ts":"2023-11-01T00:09:29.925884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 is starting a new election at term 3"}
{"level":"info","ts":"2023-11-01T00:09:29.925965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became pre-candidate at term 3"}
{"level":"info","ts":"2023-11-01T00:09:29.925987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 received MsgPreVoteResp from 4537875a7ae50e01 at term 3"}
{"level":"info","ts":"2023-11-01T00:09:29.926006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became candidate at term 4"}
{"level":"info","ts":"2023-11-01T00:09:29.926013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 received MsgVoteResp from 4537875a7ae50e01 at term 4"}
{"level":"info","ts":"2023-11-01T00:09:29.926027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became leader at term 4"}
{"level":"info","ts":"2023-11-01T00:09:29.926034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4537875a7ae50e01 elected leader 4537875a7ae50e01 at term 4"}
{"level":"info","ts":"2023-11-01T00:09:29.929012Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4537875a7ae50e01","local-member-attributes":"{Name:multinode-391061 ClientURLs:[https://192.168.39.43:2379]}","request-path":"/0/members/4537875a7ae50e01/attributes","cluster-id":"e2f92b1da63e7b06","publish-timeout":"7s"}
{"level":"info","ts":"2023-11-01T00:09:29.929031Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-11-01T00:09:29.929748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-11-01T00:09:29.930821Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.43:2379"}
{"level":"info","ts":"2023-11-01T00:09:29.930917Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-11-01T00:09:29.931219Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-11-01T00:09:29.931355Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
*
* ==> etcd [37d9dd0022b9] <==
* {"level":"info","ts":"2023-11-01T00:06:23.592713Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-11-01T00:06:25.029032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 is starting a new election at term 2"}
{"level":"info","ts":"2023-11-01T00:06:25.029091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became pre-candidate at term 2"}
{"level":"info","ts":"2023-11-01T00:06:25.029125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 received MsgPreVoteResp from 4537875a7ae50e01 at term 2"}
{"level":"info","ts":"2023-11-01T00:06:25.029138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became candidate at term 3"}
{"level":"info","ts":"2023-11-01T00:06:25.029144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 received MsgVoteResp from 4537875a7ae50e01 at term 3"}
{"level":"info","ts":"2023-11-01T00:06:25.029151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became leader at term 3"}
{"level":"info","ts":"2023-11-01T00:06:25.029158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4537875a7ae50e01 elected leader 4537875a7ae50e01 at term 3"}
{"level":"info","ts":"2023-11-01T00:06:25.032053Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4537875a7ae50e01","local-member-attributes":"{Name:multinode-391061 ClientURLs:[https://192.168.39.43:2379]}","request-path":"/0/members/4537875a7ae50e01/attributes","cluster-id":"e2f92b1da63e7b06","publish-timeout":"7s"}
{"level":"info","ts":"2023-11-01T00:06:25.032229Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-11-01T00:06:25.032298Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-11-01T00:06:25.032532Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-11-01T00:06:25.03234Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-11-01T00:06:25.033416Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.43:2379"}
{"level":"info","ts":"2023-11-01T00:06:25.035467Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-11-01T00:08:24.506535Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-11-01T00:08:24.506671Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-391061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.43:2380"],"advertise-client-urls":["https://192.168.39.43:2379"]}
{"level":"warn","ts":"2023-11-01T00:08:24.506833Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2023-11-01T00:08:24.506976Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2023-11-01T00:08:24.561334Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.43:2379: use of closed network connection"}
{"level":"warn","ts":"2023-11-01T00:08:24.561383Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.43:2379: use of closed network connection"}
{"level":"info","ts":"2023-11-01T00:08:24.561438Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4537875a7ae50e01","current-leader-member-id":"4537875a7ae50e01"}
{"level":"info","ts":"2023-11-01T00:08:24.566054Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.43:2380"}
{"level":"info","ts":"2023-11-01T00:08:24.566194Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.43:2380"}
{"level":"info","ts":"2023-11-01T00:08:24.566206Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-391061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.43:2380"],"advertise-client-urls":["https://192.168.39.43:2379"]}
*
* ==> kernel <==
* 00:10:12 up 1 min, 0 users, load average: 0.53, 0.19, 0.06
Linux multinode-391061 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kindnet [5c271018fdbe] <==
* I1101 00:09:37.076606 1 main.go:102] connected to apiserver: https://10.96.0.1:443
I1101 00:09:37.076860 1 main.go:107] hostIP = 192.168.39.43
podIP = 192.168.39.43
I1101 00:09:37.077418 1 main.go:116] setting mtu 1500 for CNI
I1101 00:09:37.077435 1 main.go:146] kindnetd IP family: "ipv4"
I1101 00:09:37.077457 1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
I1101 00:09:37.765325 1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
I1101 00:09:37.765411 1 main.go:227] handling current node
I1101 00:09:37.765817 1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
I1101 00:09:37.765920 1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24]
I1101 00:09:37.766175 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.249 Flags: [] Table: 0}
I1101 00:09:47.778674 1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
I1101 00:09:47.778897 1 main.go:227] handling current node
I1101 00:09:47.779259 1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
I1101 00:09:47.779370 1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24]
I1101 00:09:57.791597 1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
I1101 00:09:57.791660 1 main.go:227] handling current node
I1101 00:09:57.791697 1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
I1101 00:09:57.791707 1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24]
I1101 00:10:07.806269 1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
I1101 00:10:07.806343 1 main.go:227] handling current node
I1101 00:10:07.806360 1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
I1101 00:10:07.806370 1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24]
*
* ==> kindnet [7e5dd13abba8] <==
* I1101 00:07:52.418568 1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
I1101 00:07:52.418863 1 main.go:227] handling current node
I1101 00:07:52.419038 1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
I1101 00:07:52.419172 1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24]
I1101 00:07:52.419455 1 main.go:223] Handling node with IPs: map[192.168.39.62:{}]
I1101 00:07:52.419617 1 main.go:250] Node multinode-391061-m03 has CIDR [10.244.3.0/24]
I1101 00:08:02.433400 1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
I1101 00:08:02.433450 1 main.go:227] handling current node
I1101 00:08:02.433469 1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
I1101 00:08:02.433475 1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24]
I1101 00:08:02.433691 1 main.go:223] Handling node with IPs: map[192.168.39.62:{}]
I1101 00:08:02.433721 1 main.go:250] Node multinode-391061-m03 has CIDR [10.244.2.0/24]
I1101 00:08:02.433954 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.62 Flags: [] Table: 0}
I1101 00:08:12.447883 1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
I1101 00:08:12.447998 1 main.go:227] handling current node
I1101 00:08:12.448013 1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
I1101 00:08:12.448020 1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24]
I1101 00:08:12.449781 1 main.go:223] Handling node with IPs: map[192.168.39.62:{}]
I1101 00:08:12.449820 1 main.go:250] Node multinode-391061-m03 has CIDR [10.244.2.0/24]
I1101 00:08:22.463898 1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
I1101 00:08:22.464012 1 main.go:227] handling current node
I1101 00:08:22.464023 1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
I1101 00:08:22.464028 1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24]
I1101 00:08:22.464151 1 main.go:223] Handling node with IPs: map[192.168.39.62:{}]
I1101 00:08:22.464157 1 main.go:250] Node multinode-391061-m03 has CIDR [10.244.2.0/24]
*
* ==> kube-apiserver [a49a86a47d7c] <==
* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1101 00:08:34.314179 1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1101 00:08:34.348000 1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1101 00:08:34.432385 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
*
* ==> kube-apiserver [ad9ce8cffbbd] <==
* I1101 00:09:31.332906 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1101 00:09:31.333121 1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
I1101 00:09:31.331296 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I1101 00:09:31.473440 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
I1101 00:09:31.478009 1 shared_informer.go:318] Caches are synced for node_authorizer
I1101 00:09:31.524950 1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
I1101 00:09:31.528324 1 shared_informer.go:318] Caches are synced for configmaps
I1101 00:09:31.528549 1 apf_controller.go:377] Running API Priority and Fairness config worker
I1101 00:09:31.528556 1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
I1101 00:09:31.530253 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1101 00:09:31.536719 1 shared_informer.go:318] Caches are synced for crd-autoregister
I1101 00:09:31.537039 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1101 00:09:31.537669 1 aggregator.go:166] initial CRD sync complete...
I1101 00:09:31.538387 1 autoregister_controller.go:141] Starting autoregister controller
I1101 00:09:31.538397 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1101 00:09:31.538404 1 cache.go:39] Caches are synced for autoregister controller
I1101 00:09:32.337365 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W1101 00:09:32.766705 1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.43]
I1101 00:09:32.772589 1 controller.go:624] quota admission added evaluator for: endpoints
I1101 00:09:32.786650 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1101 00:09:34.247198 1 controller.go:624] quota admission added evaluator for: daemonsets.apps
I1101 00:09:34.484576 1 controller.go:624] quota admission added evaluator for: serviceaccounts
I1101 00:09:34.500470 1 controller.go:624] quota admission added evaluator for: deployments.apps
I1101 00:09:34.591116 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1101 00:09:34.605275 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
*
* ==> kube-controller-manager [32294fac02b3] <==
* I1101 00:07:30.756798 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="109.824µs"
I1101 00:07:31.546793 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="99.538µs"
I1101 00:07:31.550888 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.487µs"
I1101 00:07:51.172866 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-lgqxz"
I1101 00:07:51.184193 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.907187ms"
I1101 00:07:51.184323 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.747µs"
I1101 00:07:51.197194 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.955962ms"
I1101 00:07:51.197472 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="228.664µs"
I1101 00:07:51.206401 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="73.711µs"
I1101 00:07:53.042996 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.681385ms"
I1101 00:07:53.043314 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="263.588µs"
I1101 00:07:54.181191 1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-391061-m02"
I1101 00:07:54.292099 1 event.go:307] "Event occurred" object="multinode-391061-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-391061-m03 event: Removing Node multinode-391061-m03 from Controller"
I1101 00:07:55.043180 1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-391061-m02"
I1101 00:07:55.043308 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-391061-m03\" does not exist"
I1101 00:07:55.044961 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-8p7xh" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-8p7xh"
I1101 00:07:55.067698 1 range_allocator.go:380] "Set node PodCIDR" node="multinode-391061-m03" podCIDRs=["10.244.2.0/24"]
I1101 00:07:55.878604 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.605µs"
I1101 00:07:56.054311 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="75.305µs"
I1101 00:07:56.060178 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.941µs"
I1101 00:07:56.064177 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.937µs"
I1101 00:07:59.293276 1 event.go:307] "Event occurred" object="multinode-391061-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-391061-m03 event: Registered Node multinode-391061-m03 in Controller"
I1101 00:08:20.322442 1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-391061-m02"
I1101 00:08:22.787615 1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-391061-m02"
I1101 00:08:24.299109 1 event.go:307] "Event occurred" object="multinode-391061-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-391061-m03 event: Removing Node multinode-391061-m03 from Controller"
*
* ==> kube-controller-manager [c2c9b3f6a6e3] <==
* I1101 00:09:43.752727 1 shared_informer.go:318] Caches are synced for certificate-csrapproving
I1101 00:09:43.754170 1 shared_informer.go:318] Caches are synced for crt configmap
I1101 00:09:43.756628 1 shared_informer.go:318] Caches are synced for ephemeral
I1101 00:09:43.758945 1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
I1101 00:09:43.761079 1 shared_informer.go:318] Caches are synced for ReplicationController
I1101 00:09:43.766052 1 shared_informer.go:318] Caches are synced for GC
I1101 00:09:43.768414 1 shared_informer.go:318] Caches are synced for node
I1101 00:09:43.768654 1 range_allocator.go:174] "Sending events to api server"
I1101 00:09:43.769022 1 range_allocator.go:178] "Starting range CIDR allocator"
I1101 00:09:43.769049 1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
I1101 00:09:43.769056 1 shared_informer.go:318] Caches are synced for cidrallocator
I1101 00:09:43.774287 1 shared_informer.go:318] Caches are synced for disruption
I1101 00:09:43.776800 1 shared_informer.go:318] Caches are synced for cronjob
I1101 00:09:43.785117 1 shared_informer.go:318] Caches are synced for PV protection
I1101 00:09:43.805894 1 shared_informer.go:318] Caches are synced for deployment
I1101 00:09:43.809436 1 shared_informer.go:318] Caches are synced for ReplicaSet
I1101 00:09:43.809815 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="119.601µs"
I1101 00:09:43.809826 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.574µs"
I1101 00:09:43.819800 1 shared_informer.go:318] Caches are synced for persistent volume
I1101 00:09:43.863962 1 shared_informer.go:318] Caches are synced for resource quota
I1101 00:09:43.896418 1 shared_informer.go:318] Caches are synced for resource quota
I1101 00:09:43.899176 1 shared_informer.go:318] Caches are synced for attach detach
I1101 00:09:44.306189 1 shared_informer.go:318] Caches are synced for garbage collector
I1101 00:09:44.344919 1 shared_informer.go:318] Caches are synced for garbage collector
I1101 00:09:44.344971 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
*
* ==> kube-proxy [a5893c8acc57] <==
* I1101 00:09:33.971163 1 server_others.go:69] "Using iptables proxy"
I1101 00:09:34.046638 1 node.go:141] Successfully retrieved node IP: 192.168.39.43
I1101 00:09:34.159869 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
I1101 00:09:34.159893 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1101 00:09:34.183447 1 server_others.go:152] "Using iptables Proxier"
I1101 00:09:34.183946 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I1101 00:09:34.186201 1 server.go:846] "Version info" version="v1.28.3"
I1101 00:09:34.186217 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 00:09:34.188049 1 config.go:188] "Starting service config controller"
I1101 00:09:34.188282 1 shared_informer.go:311] Waiting for caches to sync for service config
I1101 00:09:34.188372 1 config.go:97] "Starting endpoint slice config controller"
I1101 00:09:34.188378 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I1101 00:09:34.191644 1 config.go:315] "Starting node config controller"
I1101 00:09:34.191652 1 shared_informer.go:311] Waiting for caches to sync for node config
I1101 00:09:34.288910 1 shared_informer.go:318] Caches are synced for endpoint slice config
I1101 00:09:34.288969 1 shared_informer.go:318] Caches are synced for service config
I1101 00:09:34.337116 1 shared_informer.go:318] Caches are synced for node config
*
* ==> kube-proxy [beeaf0ac020b] <==
* I1101 00:06:28.096795 1 server_others.go:69] "Using iptables proxy"
I1101 00:06:28.127555 1 node.go:141] Successfully retrieved node IP: 192.168.39.43
I1101 00:06:28.365777 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
I1101 00:06:28.365834 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1101 00:06:28.368654 1 server_others.go:152] "Using iptables Proxier"
I1101 00:06:28.369134 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I1101 00:06:28.369499 1 server.go:846] "Version info" version="v1.28.3"
I1101 00:06:28.369511 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 00:06:28.375081 1 config.go:188] "Starting service config controller"
I1101 00:06:28.375497 1 shared_informer.go:311] Waiting for caches to sync for service config
I1101 00:06:28.375528 1 config.go:97] "Starting endpoint slice config controller"
I1101 00:06:28.375533 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I1101 00:06:28.377256 1 config.go:315] "Starting node config controller"
I1101 00:06:28.377295 1 shared_informer.go:311] Waiting for caches to sync for node config
I1101 00:06:28.641604 1 shared_informer.go:318] Caches are synced for node config
I1101 00:06:28.642049 1 shared_informer.go:318] Caches are synced for service config
I1101 00:06:28.642162 1 shared_informer.go:318] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [57698df88060] <==
* I1101 00:09:29.511193 1 serving.go:348] Generated self-signed cert in-memory
W1101 00:09:31.436874 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1101 00:09:31.436979 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1101 00:09:31.437012 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W1101 00:09:31.437076 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1101 00:09:31.479769 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
I1101 00:09:31.480029 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 00:09:31.481854 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1101 00:09:31.482174 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1101 00:09:31.483071 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I1101 00:09:31.483309 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1101 00:09:31.583076 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [c5ea3d84d06f] <==
* I1101 00:06:23.847682 1 serving.go:348] Generated self-signed cert in-memory
W1101 00:06:26.463897 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1101 00:06:26.464011 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1101 00:06:26.464023 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W1101 00:06:26.464029 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1101 00:06:26.499405 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
I1101 00:06:26.499451 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 00:06:26.501549 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I1101 00:06:26.502431 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1101 00:06:26.502487 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1101 00:06:26.502608 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1101 00:06:26.602635 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1101 00:08:24.434312 1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
I1101 00:08:24.434482 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I1101 00:08:24.435090 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E1101 00:08:24.435338 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Journal begins at Wed 2023-11-01 00:09:00 UTC, ends at Wed 2023-11-01 00:10:12 UTC. --
Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.399191 1319 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.399277 1319 projected.go:198] Error preparing data for projected volume kube-api-access-r4kj9 for pod default/busybox-5bc68d56bd-gm6t7: object "default"/"kube-root-ca.crt" not registered
Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.399346 1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9 podName:2c54225b-c1bf-4e3d-9de3-dfc1676104bf nodeName:}" failed. No retries permitted until 2023-11-01 00:09:32.899330697 +0000 UTC m=+7.850513782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r4kj9" (UniqueName: "kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9") pod "busybox-5bc68d56bd-gm6t7" (UID: "2c54225b-c1bf-4e3d-9de3-dfc1676104bf") : object "default"/"kube-root-ca.crt" not registered
Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.970970 1319 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.971033 1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb94555e-1465-4dec-9d6d-ebcbec02841e-config-volume podName:eb94555e-1465-4dec-9d6d-ebcbec02841e nodeName:}" failed. No retries permitted until 2023-11-01 00:09:33.971019978 +0000 UTC m=+8.922203058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eb94555e-1465-4dec-9d6d-ebcbec02841e-config-volume") pod "coredns-5dd5756b68-dg5w7" (UID: "eb94555e-1465-4dec-9d6d-ebcbec02841e") : object "kube-system"/"coredns" not registered
Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.971438 1319 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.971452 1319 projected.go:198] Error preparing data for projected volume kube-api-access-r4kj9 for pod default/busybox-5bc68d56bd-gm6t7: object "default"/"kube-root-ca.crt" not registered
Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.972099 1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9 podName:2c54225b-c1bf-4e3d-9de3-dfc1676104bf nodeName:}" failed. No retries permitted until 2023-11-01 00:09:33.97204017 +0000 UTC m=+8.923223252 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r4kj9" (UniqueName: "kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9") pod "busybox-5bc68d56bd-gm6t7" (UID: "2c54225b-c1bf-4e3d-9de3-dfc1676104bf") : object "default"/"kube-root-ca.crt" not registered
Nov 01 00:09:33 multinode-391061 kubelet[1319]: E1101 00:09:33.983261 1319 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Nov 01 00:09:33 multinode-391061 kubelet[1319]: E1101 00:09:33.983357 1319 projected.go:198] Error preparing data for projected volume kube-api-access-r4kj9 for pod default/busybox-5bc68d56bd-gm6t7: object "default"/"kube-root-ca.crt" not registered
Nov 01 00:09:33 multinode-391061 kubelet[1319]: E1101 00:09:33.983414 1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9 podName:2c54225b-c1bf-4e3d-9de3-dfc1676104bf nodeName:}" failed. No retries permitted until 2023-11-01 00:09:35.983397975 +0000 UTC m=+10.934581055 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r4kj9" (UniqueName: "kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9") pod "busybox-5bc68d56bd-gm6t7" (UID: "2c54225b-c1bf-4e3d-9de3-dfc1676104bf") : object "default"/"kube-root-ca.crt" not registered
Nov 01 00:09:33 multinode-391061 kubelet[1319]: E1101 00:09:33.983865 1319 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Nov 01 00:09:33 multinode-391061 kubelet[1319]: E1101 00:09:33.983912 1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb94555e-1465-4dec-9d6d-ebcbec02841e-config-volume podName:eb94555e-1465-4dec-9d6d-ebcbec02841e nodeName:}" failed. No retries permitted until 2023-11-01 00:09:35.983901106 +0000 UTC m=+10.935084185 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eb94555e-1465-4dec-9d6d-ebcbec02841e-config-volume") pod "coredns-5dd5756b68-dg5w7" (UID: "eb94555e-1465-4dec-9d6d-ebcbec02841e") : object "kube-system"/"coredns" not registered
Nov 01 00:09:34 multinode-391061 kubelet[1319]: I1101 00:09:34.131973 1319 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40ae286f2e451298335e60ff530480a9945a5b00cbbb6a4b638e780b78fbf458"
Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.025454 1319 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.025667 1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb94555e-1465-4dec-9d6d-ebcbec02841e-config-volume podName:eb94555e-1465-4dec-9d6d-ebcbec02841e nodeName:}" failed. No retries permitted until 2023-11-01 00:09:40.02564935 +0000 UTC m=+14.976832432 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eb94555e-1465-4dec-9d6d-ebcbec02841e-config-volume") pod "coredns-5dd5756b68-dg5w7" (UID: "eb94555e-1465-4dec-9d6d-ebcbec02841e") : object "kube-system"/"coredns" not registered
Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.026184 1319 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.026205 1319 projected.go:198] Error preparing data for projected volume kube-api-access-r4kj9 for pod default/busybox-5bc68d56bd-gm6t7: object "default"/"kube-root-ca.crt" not registered
Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.026250 1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9 podName:2c54225b-c1bf-4e3d-9de3-dfc1676104bf nodeName:}" failed. No retries permitted until 2023-11-01 00:09:40.026238503 +0000 UTC m=+14.977421570 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r4kj9" (UniqueName: "kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9") pod "busybox-5bc68d56bd-gm6t7" (UID: "2c54225b-c1bf-4e3d-9de3-dfc1676104bf") : object "default"/"kube-root-ca.crt" not registered
Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.504880 1319 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-dg5w7" podUID="eb94555e-1465-4dec-9d6d-ebcbec02841e"
Nov 01 00:09:36 multinode-391061 kubelet[1319]: I1101 00:09:36.504956 1319 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fde88b0de04da9bbd831a6d4c66ca23079816d358c2a073c1c844f3c823b3a46"
Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.507590 1319 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-gm6t7" podUID="2c54225b-c1bf-4e3d-9de3-dfc1676104bf"
Nov 01 00:09:38 multinode-391061 kubelet[1319]: I1101 00:09:38.193647 1319 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Nov 01 00:09:40 multinode-391061 kubelet[1319]: I1101 00:09:40.831302 1319 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1607f59d6ba061ddfaed58cd098e43eb0a9636f0a88d126db9b8190b719c5a2c"
Nov 01 00:09:41 multinode-391061 kubelet[1319]: I1101 00:09:41.101886 1319 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd4ac2bcf1f1a7e97a662352c7ff24fed55ebabd9072e6380c598ee47a8bd587"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-391061 -n multinode-391061
helpers_test.go:261: (dbg) Run: kubectl --context multinode-391061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (83.62s)