=== RUN TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-040952
multinode_test.go:290: (dbg) Run: out/minikube-linux-amd64 stop -p multinode-040952
E0914 19:05:19.531409 14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-040952: (28.451242182s)
multinode_test.go:295: (dbg) Run: out/minikube-linux-amd64 start -p multinode-040952 --wait=true -v=8 --alsologtostderr
E0914 19:06:02.658871 14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 19:06:10.628985 14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-040952 --wait=true -v=8 --alsologtostderr: exit status 90 (1m21.375560389s)
-- stdout --
* [multinode-040952] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17217
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting control plane node multinode-040952 in cluster multinode-040952
* Restarting existing kvm2 VM for "multinode-040952" ...
* Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
* Configuring CNI (Container Networking Interface) ...
* Enabled addons:
* Verifying Kubernetes components...
* Starting worker node multinode-040952-m02 in cluster multinode-040952
* Restarting existing kvm2 VM for "multinode-040952-m02" ...
* Found network options:
- NO_PROXY=192.168.39.14
-- /stdout --
** stderr **
I0914 19:05:20.962804 29302 out.go:296] Setting OutFile to fd 1 ...
I0914 19:05:20.963060 29302 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 19:05:20.963070 29302 out.go:309] Setting ErrFile to fd 2...
I0914 19:05:20.963075 29302 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 19:05:20.963243 29302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
I0914 19:05:20.963781 29302 out.go:303] Setting JSON to false
I0914 19:05:20.964724 29302 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2870,"bootTime":1694715451,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0914 19:05:20.964780 29302 start.go:138] virtualization: kvm guest
I0914 19:05:20.967109 29302 out.go:177] * [multinode-040952] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
I0914 19:05:20.968562 29302 out.go:177] - MINIKUBE_LOCATION=17217
I0914 19:05:20.969984 29302 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0914 19:05:20.968648 29302 notify.go:220] Checking for updates...
I0914 19:05:20.972859 29302 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
I0914 19:05:20.974265 29302 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
I0914 19:05:20.975509 29302 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0914 19:05:20.976805 29302 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0914 19:05:20.978678 29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 19:05:20.978756 29302 driver.go:373] Setting default libvirt URI to qemu:///system
I0914 19:05:20.979122 29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 19:05:20.979158 29302 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 19:05:20.994127 29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36753
I0914 19:05:20.994544 29302 main.go:141] libmachine: () Calling .GetVersion
I0914 19:05:20.994996 29302 main.go:141] libmachine: Using API Version 1
I0914 19:05:20.995035 29302 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 19:05:20.995534 29302 main.go:141] libmachine: () Calling .GetMachineName
I0914 19:05:20.995713 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:21.030837 29302 out.go:177] * Using the kvm2 driver based on existing profile
I0914 19:05:21.032222 29302 start.go:298] selected driver: kvm2
I0914 19:05:21.032235 29302 start.go:902] validating driver "kvm2" against &{Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0914 19:05:21.032388 29302 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0914 19:05:21.032684 29302 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 19:05:21.032744 29302 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17217-7285/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0914 19:05:21.046926 29302 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
I0914 19:05:21.047549 29302 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0914 19:05:21.047615 29302 cni.go:84] Creating CNI manager for ""
I0914 19:05:21.047628 29302 cni.go:136] 3 nodes found, recommending kindnet
I0914 19:05:21.047635 29302 start_flags.go:321] config:
{Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0914 19:05:21.047846 29302 iso.go:125] acquiring lock: {Name:mk542b08865b5897b02c4d217212972b66d5575d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 19:05:21.049820 29302 out.go:177] * Starting control plane node multinode-040952 in cluster multinode-040952
I0914 19:05:21.051078 29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
I0914 19:05:21.051117 29302 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
I0914 19:05:21.051132 29302 cache.go:57] Caching tarball of preloaded images
I0914 19:05:21.051200 29302 preload.go:174] Found /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0914 19:05:21.051211 29302 cache.go:60] Finished verifying existence of preloaded tar for v1.28.1 on docker
I0914 19:05:21.051357 29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
I0914 19:05:21.051546 29302 start.go:365] acquiring machines lock for multinode-040952: {Name:mk07a05e24a79016fc0a298412b40eb87df032d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0914 19:05:21.051585 29302 start.go:369] acquired machines lock for "multinode-040952" in 22.658µs
I0914 19:05:21.051598 29302 start.go:96] Skipping create...Using existing machine configuration
I0914 19:05:21.051604 29302 fix.go:54] fixHost starting:
I0914 19:05:21.051851 29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 19:05:21.051877 29302 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 19:05:21.065211 29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41551
I0914 19:05:21.065673 29302 main.go:141] libmachine: () Calling .GetVersion
I0914 19:05:21.066137 29302 main.go:141] libmachine: Using API Version 1
I0914 19:05:21.066161 29302 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 19:05:21.066462 29302 main.go:141] libmachine: () Calling .GetMachineName
I0914 19:05:21.066623 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:21.066770 29302 main.go:141] libmachine: (multinode-040952) Calling .GetState
I0914 19:05:21.068116 29302 fix.go:102] recreateIfNeeded on multinode-040952: state=Stopped err=<nil>
I0914 19:05:21.068149 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
W0914 19:05:21.068327 29302 fix.go:128] unexpected machine state, will restart: <nil>
I0914 19:05:21.070143 29302 out.go:177] * Restarting existing kvm2 VM for "multinode-040952" ...
I0914 19:05:21.071437 29302 main.go:141] libmachine: (multinode-040952) Calling .Start
I0914 19:05:21.071593 29302 main.go:141] libmachine: (multinode-040952) Ensuring networks are active...
I0914 19:05:21.072249 29302 main.go:141] libmachine: (multinode-040952) Ensuring network default is active
I0914 19:05:21.072599 29302 main.go:141] libmachine: (multinode-040952) Ensuring network mk-multinode-040952 is active
I0914 19:05:21.072924 29302 main.go:141] libmachine: (multinode-040952) Getting domain xml...
I0914 19:05:21.073627 29302 main.go:141] libmachine: (multinode-040952) Creating domain...
I0914 19:05:22.290792 29302 main.go:141] libmachine: (multinode-040952) Waiting to get IP...
I0914 19:05:22.291697 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:22.292055 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:22.292102 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.292035 29331 retry.go:31] will retry after 308.296154ms: waiting for machine to come up
I0914 19:05:22.601636 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:22.602066 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:22.602099 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.602024 29331 retry.go:31] will retry after 317.837388ms: waiting for machine to come up
I0914 19:05:22.921508 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:22.921867 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:22.921901 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.921847 29331 retry.go:31] will retry after 471.086167ms: waiting for machine to come up
I0914 19:05:23.394404 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:23.394838 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:23.394871 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:23.394792 29331 retry.go:31] will retry after 484.306086ms: waiting for machine to come up
I0914 19:05:23.880204 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:23.880564 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:23.880583 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:23.880535 29331 retry.go:31] will retry after 618.601122ms: waiting for machine to come up
I0914 19:05:24.500881 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:24.501312 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:24.501338 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:24.501260 29331 retry.go:31] will retry after 909.340951ms: waiting for machine to come up
I0914 19:05:25.412225 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:25.412602 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:25.412643 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:25.412551 29331 retry.go:31] will retry after 1.126879825s: waiting for machine to come up
I0914 19:05:26.540657 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:26.541060 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:26.541092 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:26.541009 29331 retry.go:31] will retry after 1.102019824s: waiting for machine to come up
I0914 19:05:27.644123 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:27.644509 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:27.644533 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:27.644464 29331 retry.go:31] will retry after 1.486754446s: waiting for machine to come up
I0914 19:05:29.133039 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:29.133510 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:29.133535 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:29.133470 29331 retry.go:31] will retry after 2.117464983s: waiting for machine to come up
I0914 19:05:31.252796 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:31.253157 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:31.253189 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:31.253114 29331 retry.go:31] will retry after 2.386416431s: waiting for machine to come up
I0914 19:05:33.642490 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:33.643052 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:33.643079 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:33.643013 29331 retry.go:31] will retry after 2.611013914s: waiting for machine to come up
I0914 19:05:36.255832 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:36.256237 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:36.256259 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:36.256195 29331 retry.go:31] will retry after 4.317080822s: waiting for machine to come up
I0914 19:05:40.578744 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.579178 29302 main.go:141] libmachine: (multinode-040952) Found IP for machine: 192.168.39.14
I0914 19:05:40.579199 29302 main.go:141] libmachine: (multinode-040952) Reserving static IP address...
I0914 19:05:40.579208 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has current primary IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.579755 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "multinode-040952", mac: "52:54:00:0b:8d:f2", ip: "192.168.39.14"} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.579790 29302 main.go:141] libmachine: (multinode-040952) DBG | skip adding static IP to network mk-multinode-040952 - found existing host DHCP lease matching {name: "multinode-040952", mac: "52:54:00:0b:8d:f2", ip: "192.168.39.14"}
I0914 19:05:40.579808 29302 main.go:141] libmachine: (multinode-040952) Reserved static IP address: 192.168.39.14
I0914 19:05:40.579828 29302 main.go:141] libmachine: (multinode-040952) Waiting for SSH to be available...
I0914 19:05:40.579844 29302 main.go:141] libmachine: (multinode-040952) DBG | Getting to WaitForSSH function...
I0914 19:05:40.581922 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.582219 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.582248 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.582419 29302 main.go:141] libmachine: (multinode-040952) DBG | Using SSH client type: external
I0914 19:05:40.582441 29302 main.go:141] libmachine: (multinode-040952) DBG | Using SSH private key: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa (-rw-------)
I0914 19:05:40.582466 29302 main.go:141] libmachine: (multinode-040952) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa -p 22] /usr/bin/ssh <nil>}
I0914 19:05:40.582480 29302 main.go:141] libmachine: (multinode-040952) DBG | About to run SSH command:
I0914 19:05:40.582491 29302 main.go:141] libmachine: (multinode-040952) DBG | exit 0
I0914 19:05:40.677125 29302 main.go:141] libmachine: (multinode-040952) DBG | SSH cmd err, output: <nil>:
I0914 19:05:40.677493 29302 main.go:141] libmachine: (multinode-040952) Calling .GetConfigRaw
I0914 19:05:40.678081 29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
I0914 19:05:40.680506 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.680910 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.680945 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.681103 29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
I0914 19:05:40.681284 29302 machine.go:88] provisioning docker machine ...
I0914 19:05:40.681323 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:40.681566 29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
I0914 19:05:40.681734 29302 buildroot.go:166] provisioning hostname "multinode-040952"
I0914 19:05:40.681755 29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
I0914 19:05:40.681906 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:40.683964 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.684284 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.684307 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.684417 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:40.684595 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:40.684736 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:40.684890 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:40.685062 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:05:40.685397 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.14 22 <nil> <nil>}
I0914 19:05:40.685412 29302 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-040952 && echo "multinode-040952" | sudo tee /etc/hostname
I0914 19:05:40.823251 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-040952
I0914 19:05:40.823283 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:40.825791 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.826169 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.826206 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.826321 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:40.826510 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:40.826658 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:40.826793 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:40.826952 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:05:40.827274 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.14 22 <nil> <nil>}
I0914 19:05:40.827292 29302 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-040952' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-040952/g' /etc/hosts;
else
echo '127.0.1.1 multinode-040952' | sudo tee -a /etc/hosts;
fi
fi
I0914 19:05:40.958211 29302 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0914 19:05:40.958234 29302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17217-7285/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-7285/.minikube}
I0914 19:05:40.958251 29302 buildroot.go:174] setting up certificates
I0914 19:05:40.958258 29302 provision.go:83] configureAuth start
I0914 19:05:40.958270 29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
I0914 19:05:40.958579 29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
I0914 19:05:40.960950 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.961279 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.961310 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.961443 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:40.963552 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.964139 29302 provision.go:138] copyHostCerts
I0914 19:05:40.966068 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.966080 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
I0914 19:05:40.966098 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.966106 29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem, removing ...
I0914 19:05:40.966111 29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
I0914 19:05:40.966169 29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem (1082 bytes)
I0914 19:05:40.966263 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
I0914 19:05:40.966284 29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem, removing ...
I0914 19:05:40.966291 29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
I0914 19:05:40.966314 29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem (1123 bytes)
I0914 19:05:40.966407 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
I0914 19:05:40.966426 29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem, removing ...
I0914 19:05:40.966429 29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
I0914 19:05:40.966455 29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem (1679 bytes)
I0914 19:05:40.966496 29302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem org=jenkins.multinode-040952 san=[192.168.39.14 192.168.39.14 localhost 127.0.0.1 minikube multinode-040952]
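provision.go generates the server certificate in Go via crypto/x509, signed by the minikube CA and embedding the listed SANs (IPs, localhost, the node name). A rough CLI equivalent with openssl — filenames, subjects, and the one-day validity are illustrative, not what minikube actually does:

```shell
# Self-signed CA, then a server cert carrying IP/DNS SANs, signed by that CA.
# All names here are illustrative stand-ins for the paths in the log above.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -subj "/O=jenkins.multinode-040952 CA" -days 1
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
  -subj "/O=jenkins.multinode-040952"
# SANs go in an extension file; mirrors the san=[...] list in the log.
printf 'subjectAltName=IP:192.168.39.14,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-040952\n' > san.ext
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 1 -extfile san.ext
openssl verify -CAfile ca.pem server.pem
```

The docker daemon later validates clients against the same CA (`--tlscacert`), which is why both halves come from one CA key pair.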
I0914 19:05:41.093709 29302 provision.go:172] copyRemoteCerts
I0914 19:05:41.093761 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0914 19:05:41.093784 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:41.096513 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.096889 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:41.096919 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.097089 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:41.097303 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.097427 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:41.097563 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
I0914 19:05:41.185959 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0914 19:05:41.186035 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0914 19:05:41.209076 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem -> /etc/docker/server.pem
I0914 19:05:41.209136 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0914 19:05:41.231360 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0914 19:05:41.231432 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0914 19:05:41.253346 29302 provision.go:86] duration metric: configureAuth took 295.075916ms
I0914 19:05:41.253364 29302 buildroot.go:189] setting minikube options for container-runtime
I0914 19:05:41.253583 29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 19:05:41.253604 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:41.253889 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:41.256397 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.256706 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:41.256746 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.256796 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:41.256990 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.257147 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.257300 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:41.257433 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:05:41.257764 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.14 22 <nil> <nil>}
I0914 19:05:41.257781 29302 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0914 19:05:41.378606 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0914 19:05:41.378636 29302 buildroot.go:70] root file system type: tmpfs
I0914 19:05:41.378779 29302 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0914 19:05:41.378811 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:41.381344 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.381631 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:41.381653 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.381854 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:41.382017 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.382151 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.382256 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:41.382401 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:05:41.382846 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.14 22 <nil> <nil>}
I0914 19:05:41.382955 29302 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0914 19:05:41.524710 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
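The empty `ExecStart=` line ahead of the real one in the unit above is the standard systemd idiom for overriding an inherited command: the blank assignment clears whatever the base configuration set, and only then is the replacement defined. The same idiom in a bare drop-in sketch (the path and dockerd flags here are illustrative, not the full command from the log):

```ini
# Illustrative drop-in, e.g. /etc/systemd/system/docker.service.d/override.conf
[Service]
# Clear the inherited command first; without this, systemd refuses to start:
# "Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services."
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```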
I0914 19:05:41.524751 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:41.527598 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.528021 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:41.528050 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.528233 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:41.528403 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.528520 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.528618 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:41.528833 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:05:41.529147 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.14 22 <nil> <nil>}
I0914 19:05:41.529175 29302 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0914 19:05:42.395560 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
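The update above is guarded by a diff: the freshly written `docker.service.new` only replaces the live unit (and only then triggers daemon-reload, enable, and restart) when the content actually changed — here the diff fails because no old unit exists, so the swap proceeds. The compare-then-swap pattern on scratch files:

```shell
# Replace target with target.new only when contents differ, mirroring the
# diff-guarded docker.service update above. File names are illustrative.
printf 'old\n' > svc.conf
printf 'new\n' > svc.conf.new
if ! diff -u svc.conf svc.conf.new > /dev/null 2>&1; then
  mv svc.conf.new svc.conf
  # In the real flow this is where daemon-reload + restart would run.
  echo "unit changed, reload needed"
fi
cat svc.conf
```

When nothing changed, the `.new` file is left behind and the service keeps running untouched, which keeps repeated provisioning cheap.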
I0914 19:05:42.395591 29302 machine.go:91] provisioned docker machine in 1.714293106s
I0914 19:05:42.395605 29302 start.go:300] post-start starting for "multinode-040952" (driver="kvm2")
I0914 19:05:42.395617 29302 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0914 19:05:42.395637 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:42.395990 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0914 19:05:42.396021 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:42.398544 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.398997 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:42.399029 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.399146 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:42.399327 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:42.399452 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:42.399604 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
I0914 19:05:42.490598 29302 ssh_runner.go:195] Run: cat /etc/os-release
I0914 19:05:42.494659 29302 command_runner.go:130] > NAME=Buildroot
I0914 19:05:42.494675 29302 command_runner.go:130] > VERSION=2021.02.12-1-gaa3debf-dirty
I0914 19:05:42.494679 29302 command_runner.go:130] > ID=buildroot
I0914 19:05:42.494684 29302 command_runner.go:130] > VERSION_ID=2021.02.12
I0914 19:05:42.494689 29302 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0914 19:05:42.494714 29302 info.go:137] Remote host: Buildroot 2021.02.12
I0914 19:05:42.494726 29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/addons for local assets ...
I0914 19:05:42.494786 29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/files for local assets ...
I0914 19:05:42.494859 29302 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> 145062.pem in /etc/ssl/certs
I0914 19:05:42.494867 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /etc/ssl/certs/145062.pem
I0914 19:05:42.494949 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0914 19:05:42.504158 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /etc/ssl/certs/145062.pem (1708 bytes)
I0914 19:05:42.526832 29302 start.go:303] post-start completed in 131.213234ms
I0914 19:05:42.526851 29302 fix.go:56] fixHost completed within 21.475246623s
I0914 19:05:42.526869 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:42.529527 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.529937 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:42.529986 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.530137 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:42.530338 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:42.530471 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:42.530592 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:42.530728 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:05:42.531030 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.14 22 <nil> <nil>}
I0914 19:05:42.531041 29302 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0914 19:05:42.654398 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694718342.602499385
I0914 19:05:42.654428 29302 fix.go:206] guest clock: 1694718342.602499385
I0914 19:05:42.654435 29302 fix.go:219] Guest: 2023-09-14 19:05:42.602499385 +0000 UTC Remote: 2023-09-14 19:05:42.526854621 +0000 UTC m=+21.595630701 (delta=75.644764ms)
I0914 19:05:42.654452 29302 fix.go:190] guest clock delta is within tolerance: 75.644764ms
I0914 19:05:42.654457 29302 start.go:83] releasing machines lock for "multinode-040952", held for 21.60286411s
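The guest-clock check above runs `date +%s.%N` inside the VM and compares it to the host's wall clock, accepting a small skew (75.6ms here). A sketch of the delta computation using the two timestamps from the log (the 1-second tolerance is illustrative, not necessarily fix.go's threshold):

```shell
# Absolute skew between guest and host timestamps, checked against a
# tolerance, as fix.go does. Values are the ones from the log above.
awk -v g=1694718342.602499385 -v r=1694718342.526854621 'BEGIN {
  d = g - r
  if (d < 0) d = -d
  printf "delta=%.9fs within_tolerance=%s\n", d, (d < 1.0) ? "yes" : "no"
}'
```

awk is used for the fractional-second arithmetic since plain `$(( ))` shell arithmetic is integer-only.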
I0914 19:05:42.654478 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:42.654724 29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
I0914 19:05:42.657287 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.657640 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:42.657674 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.657831 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:42.658283 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:42.658453 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:42.658514 29302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0914 19:05:42.658551 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:42.658645 29302 ssh_runner.go:195] Run: cat /version.json
I0914 19:05:42.658666 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:42.660832 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.661105 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.661257 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:42.661287 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.661432 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:42.661445 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:42.661474 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.661579 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:42.661683 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:42.661749 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:42.661825 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:42.661884 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:42.661944 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
I0914 19:05:42.661988 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
I0914 19:05:42.746664 29302 command_runner.go:130] > {"iso_version": "v1.31.0-1694468241-17194", "kicbase_version": "v0.0.40-1694457807-17194", "minikube_version": "v1.31.2", "commit": "08513a9f809e39764bdb93fc427d760a652ba5ea"}
I0914 19:05:42.747194 29302 ssh_runner.go:195] Run: systemctl --version
I0914 19:05:42.773722 29302 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0914 19:05:42.773771 29302 command_runner.go:130] > systemd 247 (247)
I0914 19:05:42.773794 29302 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I0914 19:05:42.773870 29302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0914 19:05:42.779663 29302 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0914 19:05:42.779691 29302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0914 19:05:42.779753 29302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0914 19:05:42.796458 29302 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0914 19:05:42.796494 29302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
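Conflicting bridge/podman CNI configs are parked by renaming them with a `.mk_disabled` suffix rather than deleting them, so the operation is reversible. The same find/rename pattern against a scratch directory (`cni.d` stands in for /etc/cni/net.d; the second file name is invented for contrast):

```shell
# Disable matching CNI configs by renaming, as the find -exec above does.
mkdir -p cni.d
touch cni.d/87-podman-bridge.conflist cni.d/10-other.conflist
find cni.d -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls cni.d
```

The `-not -name '*.mk_disabled'` guard keeps a re-run from stacking suffixes onto already-disabled files.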
I0914 19:05:42.796506 29302 start.go:469] detecting cgroup driver to use...
I0914 19:05:42.796618 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0914 19:05:42.814727 29302 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0914 19:05:42.815085 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0914 19:05:42.825286 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0914 19:05:42.835590 29302 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0914 19:05:42.835639 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0914 19:05:42.845397 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0914 19:05:42.855075 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0914 19:05:42.864775 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0914 19:05:42.874625 29302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0914 19:05:42.885032 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
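The containerd rewrites above are all in-place sed edits over config.toml; the capture group `( *)` preserves TOML indentation while the value is replaced. The key cgroup-driver edit, shown against a scratch copy (the real path is /etc/containerd/config.toml):

```shell
# Flip SystemdCgroup to false while keeping the original indentation,
# exactly the sed expression from the log, run on a local scratch file.
cat > config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' config.toml
cat config.toml
```

Because the pattern matches any current value, the edit is idempotent: re-running it on an already-rewritten file changes nothing.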
I0914 19:05:42.895300 29302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0914 19:05:42.904333 29302 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0914 19:05:42.904406 29302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0914 19:05:42.913443 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:05:43.014402 29302 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0914 19:05:43.034266 29302 start.go:469] detecting cgroup driver to use...
I0914 19:05:43.034341 29302 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0914 19:05:43.046339 29302 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0914 19:05:43.047277 29302 command_runner.go:130] > [Unit]
I0914 19:05:43.047292 29302 command_runner.go:130] > Description=Docker Application Container Engine
I0914 19:05:43.047300 29302 command_runner.go:130] > Documentation=https://docs.docker.com
I0914 19:05:43.047311 29302 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0914 19:05:43.047321 29302 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0914 19:05:43.047330 29302 command_runner.go:130] > StartLimitBurst=3
I0914 19:05:43.047340 29302 command_runner.go:130] > StartLimitIntervalSec=60
I0914 19:05:43.047347 29302 command_runner.go:130] > [Service]
I0914 19:05:43.047354 29302 command_runner.go:130] > Type=notify
I0914 19:05:43.047374 29302 command_runner.go:130] > Restart=on-failure
I0914 19:05:43.047387 29302 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0914 19:05:43.047408 29302 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0914 19:05:43.047423 29302 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0914 19:05:43.047437 29302 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0914 19:05:43.047453 29302 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0914 19:05:43.047465 29302 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0914 19:05:43.047478 29302 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0914 19:05:43.047499 29302 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0914 19:05:43.047514 29302 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0914 19:05:43.047523 29302 command_runner.go:130] > ExecStart=
I0914 19:05:43.047549 29302 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0914 19:05:43.047562 29302 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0914 19:05:43.047574 29302 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0914 19:05:43.047589 29302 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0914 19:05:43.047600 29302 command_runner.go:130] > LimitNOFILE=infinity
I0914 19:05:43.047609 29302 command_runner.go:130] > LimitNPROC=infinity
I0914 19:05:43.047619 29302 command_runner.go:130] > LimitCORE=infinity
I0914 19:05:43.047632 29302 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0914 19:05:43.047647 29302 command_runner.go:130] > # Only systemd 226 and above support this version.
I0914 19:05:43.047657 29302 command_runner.go:130] > TasksMax=infinity
I0914 19:05:43.047668 29302 command_runner.go:130] > TimeoutStartSec=0
I0914 19:05:43.047682 29302 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0914 19:05:43.047692 29302 command_runner.go:130] > Delegate=yes
I0914 19:05:43.047706 29302 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0914 19:05:43.047716 29302 command_runner.go:130] > KillMode=process
I0914 19:05:43.047721 29302 command_runner.go:130] > [Install]
I0914 19:05:43.047732 29302 command_runner.go:130] > WantedBy=multi-user.target
I0914 19:05:43.047831 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0914 19:05:43.059348 29302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0914 19:05:43.076586 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0914 19:05:43.091070 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0914 19:05:43.103630 29302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0914 19:05:43.127566 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0914 19:05:43.140558 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0914 19:05:43.157218 29302 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0914 19:05:43.157773 29302 ssh_runner.go:195] Run: which cri-dockerd
I0914 19:05:43.161227 29302 command_runner.go:130] > /usr/bin/cri-dockerd
I0914 19:05:43.161332 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0914 19:05:43.168999 29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0914 19:05:43.184057 29302 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0914 19:05:43.293264 29302 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0914 19:05:43.399283 29302 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0914 19:05:43.399314 29302 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0914 19:05:43.416580 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:05:43.527824 29302 ssh_runner.go:195] Run: sudo systemctl restart docker
I0914 19:05:43.992016 29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0914 19:05:44.097079 29302 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0914 19:05:44.209025 29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0914 19:05:44.320513 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:05:44.428053 29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0914 19:05:44.444720 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:05:44.552820 29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0914 19:05:44.632416 29302 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0914 19:05:44.632491 29302 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0914 19:05:44.638252 29302 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0914 19:05:44.638276 29302 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0914 19:05:44.638286 29302 command_runner.go:130] > Device: 16h/22d Inode: 831 Links: 1
I0914 19:05:44.638296 29302 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0914 19:05:44.638305 29302 command_runner.go:130] > Access: 2023-09-14 19:05:44.514543091 +0000
I0914 19:05:44.638313 29302 command_runner.go:130] > Modify: 2023-09-14 19:05:44.514543091 +0000
I0914 19:05:44.638326 29302 command_runner.go:130] > Change: 2023-09-14 19:05:44.517543091 +0000
I0914 19:05:44.638332 29302 command_runner.go:130] > Birth: -
I0914 19:05:44.638715 29302 start.go:537] Will wait 60s for crictl version
I0914 19:05:44.638765 29302 ssh_runner.go:195] Run: which crictl
I0914 19:05:44.642939 29302 command_runner.go:130] > /usr/bin/crictl
I0914 19:05:44.643309 29302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0914 19:05:44.681642 29302 command_runner.go:130] > Version: 0.1.0
I0914 19:05:44.681667 29302 command_runner.go:130] > RuntimeName: docker
I0914 19:05:44.681672 29302 command_runner.go:130] > RuntimeVersion: 24.0.6
I0914 19:05:44.681678 29302 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0914 19:05:44.683160 29302 start.go:553] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.6
RuntimeApiVersion: v1alpha2
I0914 19:05:44.683219 29302 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0914 19:05:44.707204 29302 command_runner.go:130] > 24.0.6
I0914 19:05:44.708405 29302 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0914 19:05:44.736598 29302 command_runner.go:130] > 24.0.6
I0914 19:05:44.738686 29302 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
I0914 19:05:44.738719 29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
I0914 19:05:44.741297 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:44.741690 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:44.741717 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:44.741894 29302 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0914 19:05:44.745777 29302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0914 19:05:44.758482 29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
I0914 19:05:44.758533 29302 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0914 19:05:44.777353 29302 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
I0914 19:05:44.777369 29302 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
I0914 19:05:44.777375 29302 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
I0914 19:05:44.777380 29302 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
I0914 19:05:44.777385 29302 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
I0914 19:05:44.777389 29302 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I0914 19:05:44.777395 29302 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I0914 19:05:44.777399 29302 command_runner.go:130] > registry.k8s.io/pause:3.9
I0914 19:05:44.777404 29302 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0914 19:05:44.777409 29302 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0914 19:05:44.777499 29302 docker.go:636] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-scheduler:v1.28.1
kindest/kindnetd:v20230809-80a64d96
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0914 19:05:44.777521 29302 docker.go:566] Images already preloaded, skipping extraction
I0914 19:05:44.777580 29302 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0914 19:05:44.796442 29302 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
I0914 19:05:44.796466 29302 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
I0914 19:05:44.796474 29302 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
I0914 19:05:44.796487 29302 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
I0914 19:05:44.796495 29302 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
I0914 19:05:44.796502 29302 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I0914 19:05:44.796510 29302 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I0914 19:05:44.796517 29302 command_runner.go:130] > registry.k8s.io/pause:3.9
I0914 19:05:44.796526 29302 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0914 19:05:44.796533 29302 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0914 19:05:44.796582 29302 docker.go:636] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-scheduler:v1.28.1
kindest/kindnetd:v20230809-80a64d96
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0914 19:05:44.796603 29302 cache_images.go:84] Images are preloaded, skipping loading
I0914 19:05:44.796662 29302 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0914 19:05:44.826844 29302 command_runner.go:130] > cgroupfs
I0914 19:05:44.827994 29302 cni.go:84] Creating CNI manager for ""
I0914 19:05:44.828012 29302 cni.go:136] 3 nodes found, recommending kindnet
I0914 19:05:44.828028 29302 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0914 19:05:44.828050 29302 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.14 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-040952 NodeName:multinode-040952 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0914 19:05:44.828163 29302 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.14
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "multinode-040952"
kubeletExtraArgs:
node-ip: 192.168.39.14
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.14"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0914 19:05:44.828241 29302 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-040952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
[Install]
config:
{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0914 19:05:44.828290 29302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
I0914 19:05:44.837426 29302 command_runner.go:130] > kubeadm
I0914 19:05:44.837444 29302 command_runner.go:130] > kubectl
I0914 19:05:44.837448 29302 command_runner.go:130] > kubelet
I0914 19:05:44.837478 29302 binaries.go:44] Found k8s binaries, skipping transfer
I0914 19:05:44.837538 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0914 19:05:44.845710 29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
I0914 19:05:44.861289 29302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0914 19:05:44.876364 29302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
I0914 19:05:44.892748 29302 ssh_runner.go:195] Run: grep 192.168.39.14 control-plane.minikube.internal$ /etc/hosts
I0914 19:05:44.896225 29302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.14 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0914 19:05:44.908521 29302 certs.go:56] Setting up /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952 for IP: 192.168.39.14
I0914 19:05:44.908554 29302 certs.go:190] acquiring lock for shared ca certs: {Name:mk8231a646ae91c44c394a9ea29f867fd3f74220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0914 19:05:44.908702 29302 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key
I0914 19:05:44.908750 29302 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key
I0914 19:05:44.908825 29302 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key
I0914 19:05:44.908896 29302 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key.ba52ec04
I0914 19:05:44.908936 29302 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key
I0914 19:05:44.908959 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0914 19:05:44.908984 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0914 19:05:44.909003 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0914 19:05:44.909021 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0914 19:05:44.909038 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0914 19:05:44.909057 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0914 19:05:44.909069 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0914 19:05:44.909083 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0914 19:05:44.909133 29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem (1338 bytes)
W0914 19:05:44.909164 29302 certs.go:433] ignoring /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506_empty.pem, impossibly tiny 0 bytes
I0914 19:05:44.909175 29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem (1679 bytes)
I0914 19:05:44.909194 29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem (1082 bytes)
I0914 19:05:44.909221 29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem (1123 bytes)
I0914 19:05:44.909246 29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem (1679 bytes)
I0914 19:05:44.909284 29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem (1708 bytes)
I0914 19:05:44.909309 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem -> /usr/share/ca-certificates/14506.pem
I0914 19:05:44.909322 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /usr/share/ca-certificates/145062.pem
I0914 19:05:44.909336 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0914 19:05:44.909846 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0914 19:05:44.934419 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0914 19:05:44.957511 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0914 19:05:44.980559 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0914 19:05:45.004923 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0914 19:05:45.028375 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0914 19:05:45.051817 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0914 19:05:45.074510 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0914 19:05:45.098260 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem --> /usr/share/ca-certificates/14506.pem (1338 bytes)
I0914 19:05:45.121292 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /usr/share/ca-certificates/145062.pem (1708 bytes)
I0914 19:05:45.144038 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0914 19:05:45.166026 29302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0914 19:05:45.181807 29302 ssh_runner.go:195] Run: openssl version
I0914 19:05:45.187376 29302 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0914 19:05:45.187428 29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14506.pem && ln -fs /usr/share/ca-certificates/14506.pem /etc/ssl/certs/14506.pem"
I0914 19:05:45.196849 29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14506.pem
I0914 19:05:45.201160 29302 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 18:48 /usr/share/ca-certificates/14506.pem
I0914 19:05:45.201218 29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 18:48 /usr/share/ca-certificates/14506.pem
I0914 19:05:45.201259 29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14506.pem
I0914 19:05:45.206455 29302 command_runner.go:130] > 51391683
I0914 19:05:45.206657 29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14506.pem /etc/ssl/certs/51391683.0"
I0914 19:05:45.216148 29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145062.pem && ln -fs /usr/share/ca-certificates/145062.pem /etc/ssl/certs/145062.pem"
I0914 19:05:45.225498 29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145062.pem
I0914 19:05:45.229584 29302 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 18:48 /usr/share/ca-certificates/145062.pem
I0914 19:05:45.229749 29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 18:48 /usr/share/ca-certificates/145062.pem
I0914 19:05:45.229794 29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145062.pem
I0914 19:05:45.235209 29302 command_runner.go:130] > 3ec20f2e
I0914 19:05:45.235283 29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145062.pem /etc/ssl/certs/3ec20f2e.0"
I0914 19:05:45.244557 29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0914 19:05:45.253825 29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0914 19:05:45.258352 29302 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
I0914 19:05:45.258379 29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
I0914 19:05:45.258421 29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0914 19:05:45.263679 29302 command_runner.go:130] > b5213941
I0914 19:05:45.263724 29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0914 19:05:45.273201 29302 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0914 19:05:45.277387 29302 command_runner.go:130] > ca.crt
I0914 19:05:45.277404 29302 command_runner.go:130] > ca.key
I0914 19:05:45.277412 29302 command_runner.go:130] > healthcheck-client.crt
I0914 19:05:45.277419 29302 command_runner.go:130] > healthcheck-client.key
I0914 19:05:45.277426 29302 command_runner.go:130] > peer.crt
I0914 19:05:45.277433 29302 command_runner.go:130] > peer.key
I0914 19:05:45.277439 29302 command_runner.go:130] > server.crt
I0914 19:05:45.277446 29302 command_runner.go:130] > server.key
I0914 19:05:45.277502 29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0914 19:05:45.283251 29302 command_runner.go:130] > Certificate will not expire
I0914 19:05:45.283310 29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0914 19:05:45.289331 29302 command_runner.go:130] > Certificate will not expire
I0914 19:05:45.289405 29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0914 19:05:45.295261 29302 command_runner.go:130] > Certificate will not expire
I0914 19:05:45.295329 29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0914 19:05:45.300680 29302 command_runner.go:130] > Certificate will not expire
I0914 19:05:45.300910 29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0914 19:05:45.306424 29302 command_runner.go:130] > Certificate will not expire
I0914 19:05:45.306599 29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0914 19:05:45.311906 29302 command_runner.go:130] > Certificate will not expire
I0914 19:05:45.312249 29302 kubeadm.go:404] StartCluster: {Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0914 19:05:45.312423 29302 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0914 19:05:45.331162 29302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0914 19:05:45.340190 29302 command_runner.go:130] > /var/lib/kubelet/config.yaml
I0914 19:05:45.340212 29302 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
I0914 19:05:45.340221 29302 command_runner.go:130] > /var/lib/minikube/etcd:
I0914 19:05:45.340226 29302 command_runner.go:130] > member
I0914 19:05:45.340246 29302 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I0914 19:05:45.340267 29302 kubeadm.go:636] restartCluster start
I0914 19:05:45.340309 29302 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0914 19:05:45.348452 29302 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0914 19:05:45.348894 29302 kubeconfig.go:135] verify returned: extract IP: "multinode-040952" does not appear in /home/jenkins/minikube-integration/17217-7285/kubeconfig
I0914 19:05:45.348998 29302 kubeconfig.go:146] "multinode-040952" context is missing from /home/jenkins/minikube-integration/17217-7285/kubeconfig - will repair!
I0914 19:05:45.349266 29302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-7285/kubeconfig: {Name:mkd810f3a7b7ee0c3e3eff94a19f3da881e8200c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0914 19:05:45.349662 29302 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17217-7285/kubeconfig
I0914 19:05:45.349849 29302 kapi.go:59] client config for multinode-040952: &rest.Config{Host:"https://192.168.39.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key", CAFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0914 19:05:45.350444 29302 cert_rotation.go:137] Starting client certificate rotation controller
I0914 19:05:45.350587 29302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0914 19:05:45.358418 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:45.358456 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:45.368403 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:45.368429 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:45.368512 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:45.378454 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:45.879114 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:45.879187 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:45.890404 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:46.379073 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:46.379137 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:46.390460 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:46.878635 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:46.878712 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:46.890234 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:47.378771 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:47.378861 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:47.390972 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:47.879569 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:47.879636 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:47.891015 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:48.378618 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:48.378691 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:48.390037 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:48.878591 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:48.878656 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:48.889682 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:49.379283 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:49.379348 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:49.390298 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:49.878830 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:49.878929 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:49.890070 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:50.378594 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:50.378669 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:50.389750 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:50.879406 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:50.879474 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:50.890792 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:51.378749 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:51.378818 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:51.390362 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:51.878913 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:51.878983 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:51.890684 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:52.379313 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:52.379396 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:52.390412 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:52.878965 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:52.879054 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:52.890079 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:53.378659 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:53.378734 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:53.389835 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:53.879480 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:53.879549 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:53.890643 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:54.379316 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:54.379396 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:54.390543 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:54.879126 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:54.879190 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:54.890939 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:55.358694 29302 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
I0914 19:05:55.358719 29302 kubeadm.go:1128] stopping kube-system containers ...
I0914 19:05:55.358774 29302 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0914 19:05:55.380728 29302 command_runner.go:130] > 5ca168b256ec
I0914 19:05:55.380744 29302 command_runner.go:130] > bda018c9a602
I0914 19:05:55.380748 29302 command_runner.go:130] > fb2dbcea99e9
I0914 19:05:55.380752 29302 command_runner.go:130] > 2de9c2baa72f
I0914 19:05:55.380756 29302 command_runner.go:130] > 1dac2d18ee96
I0914 19:05:55.380760 29302 command_runner.go:130] > bd14e8416f22
I0914 19:05:55.380764 29302 command_runner.go:130] > 2c6b193d8f06
I0914 19:05:55.380768 29302 command_runner.go:130] > ac89590af9af
I0914 19:05:55.380771 29302 command_runner.go:130] > e7dd2a8d2bf2
I0914 19:05:55.380776 29302 command_runner.go:130] > 79de1cbad023
I0914 19:05:55.380780 29302 command_runner.go:130] > bdae306df774
I0914 19:05:55.380783 29302 command_runner.go:130] > 7ae1932584ff
I0914 19:05:55.380787 29302 command_runner.go:130] > 3204588282f3
I0914 19:05:55.380790 29302 command_runner.go:130] > c60a4b7edf2a
I0914 19:05:55.380794 29302 command_runner.go:130] > bf69af78fefd
I0914 19:05:55.380798 29302 command_runner.go:130] > 992d221cf3de
I0914 19:05:55.381007 29302 docker.go:462] Stopping containers: [5ca168b256ec bda018c9a602 fb2dbcea99e9 2de9c2baa72f 1dac2d18ee96 bd14e8416f22 2c6b193d8f06 ac89590af9af e7dd2a8d2bf2 79de1cbad023 bdae306df774 7ae1932584ff 3204588282f3 c60a4b7edf2a bf69af78fefd 992d221cf3de]
I0914 19:05:55.381063 29302 ssh_runner.go:195] Run: docker stop 5ca168b256ec bda018c9a602 fb2dbcea99e9 2de9c2baa72f 1dac2d18ee96 bd14e8416f22 2c6b193d8f06 ac89590af9af e7dd2a8d2bf2 79de1cbad023 bdae306df774 7ae1932584ff 3204588282f3 c60a4b7edf2a bf69af78fefd 992d221cf3de
I0914 19:05:55.400500 29302 command_runner.go:130] > 5ca168b256ec
I0914 19:05:55.400523 29302 command_runner.go:130] > bda018c9a602
I0914 19:05:55.400528 29302 command_runner.go:130] > fb2dbcea99e9
I0914 19:05:55.400532 29302 command_runner.go:130] > 2de9c2baa72f
I0914 19:05:55.400537 29302 command_runner.go:130] > 1dac2d18ee96
I0914 19:05:55.400545 29302 command_runner.go:130] > bd14e8416f22
I0914 19:05:55.400549 29302 command_runner.go:130] > 2c6b193d8f06
I0914 19:05:55.400915 29302 command_runner.go:130] > ac89590af9af
I0914 19:05:55.400933 29302 command_runner.go:130] > e7dd2a8d2bf2
I0914 19:05:55.400941 29302 command_runner.go:130] > 79de1cbad023
I0914 19:05:55.400947 29302 command_runner.go:130] > bdae306df774
I0914 19:05:55.400953 29302 command_runner.go:130] > 7ae1932584ff
I0914 19:05:55.400959 29302 command_runner.go:130] > 3204588282f3
I0914 19:05:55.400965 29302 command_runner.go:130] > c60a4b7edf2a
I0914 19:05:55.400970 29302 command_runner.go:130] > bf69af78fefd
I0914 19:05:55.400976 29302 command_runner.go:130] > 992d221cf3de
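The two-step pattern above (list kube-system container IDs with `docker ps -a --filter`, then pass all of them to a single `docker stop`) reduces to joining the IDs into one command line. `buildStopCmd` below is an illustrative helper under that assumption, not part of minikube:

```go
package main

import (
	"fmt"
	"strings"
)

// buildStopCmd joins container IDs into one `docker stop` invocation,
// as the log shows minikube doing for all kube-system containers at once.
func buildStopCmd(ids []string) string {
	return "docker stop " + strings.Join(ids, " ")
}

func main() {
	ids := []string{"5ca168b256ec", "bda018c9a602"}
	fmt.Println(buildStopCmd(ids)) // docker stop 5ca168b256ec bda018c9a602
}
```

Stopping them in one invocation lets the docker daemon parallelize the shutdowns rather than paying the CLI round-trip per container.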
I0914 19:05:55.402045 29302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0914 19:05:55.416372 29302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0914 19:05:55.424910 29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0914 19:05:55.424932 29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0914 19:05:55.424943 29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0914 19:05:55.424952 29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0914 19:05:55.424980 29302 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0914 19:05:55.425021 29302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0914 19:05:55.433299 29302 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0914 19:05:55.433317 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0914 19:05:55.549527 29302 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0914 19:05:55.549554 29302 command_runner.go:130] > [certs] Using existing ca certificate authority
I0914 19:05:55.549564 29302 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I0914 19:05:55.549574 29302 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0914 19:05:55.549583 29302 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
I0914 19:05:55.549599 29302 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
I0914 19:05:55.549609 29302 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
I0914 19:05:55.549615 29302 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
I0914 19:05:55.549624 29302 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
I0914 19:05:55.549633 29302 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0914 19:05:55.549640 29302 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
I0914 19:05:55.549657 29302 command_runner.go:130] > [certs] Using the existing "sa" key
I0914 19:05:55.549745 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0914 19:05:55.598988 29302 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0914 19:05:55.824313 29302 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0914 19:05:55.900894 29302 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0914 19:05:56.276915 29302 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0914 19:05:56.339928 29302 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0914 19:05:56.342661 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0914 19:05:56.405203 29302 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0914 19:05:56.406633 29302 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0914 19:05:56.407055 29302 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0914 19:05:56.524034 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0914 19:05:56.589683 29302 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0914 19:05:56.589714 29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0914 19:05:56.593812 29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0914 19:05:56.595032 29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0914 19:05:56.597321 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0914 19:05:56.696497 29302 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0914 19:05:56.699815 29302 api_server.go:52] waiting for apiserver process to appear ...
I0914 19:05:56.699898 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:56.713289 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:57.226345 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:57.726390 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:58.226095 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:58.726390 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:59.226644 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:59.241067 29302 command_runner.go:130] > 1693
I0914 19:05:59.241381 29302 api_server.go:72] duration metric: took 2.541565826s to wait for apiserver process to appear ...
I0914 19:05:59.241402 29302 api_server.go:88] waiting for apiserver healthz status ...
I0914 19:05:59.241422 29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
I0914 19:06:02.195757 29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0914 19:06:02.195786 29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0914 19:06:02.195796 29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
I0914 19:06:02.307219 29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-system-namespaces-controller ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0914 19:06:02.307250 29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-system-namespaces-controller ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0914 19:06:02.807963 29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
I0914 19:06:02.814842 29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0914 19:06:02.814876 29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0914 19:06:03.307503 29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
I0914 19:06:03.315888 29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0914 19:06:03.315914 29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0914 19:06:03.807505 29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
I0914 19:06:03.812721 29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
ok
I0914 19:06:03.812788 29302 round_trippers.go:463] GET https://192.168.39.14:8443/version
I0914 19:06:03.812794 29302 round_trippers.go:469] Request Headers:
I0914 19:06:03.812802 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:03.812809 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:03.821345 29302 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0914 19:06:03.821376 29302 round_trippers.go:577] Response Headers:
I0914 19:06:03.821387 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:03.821396 29302 round_trippers.go:580] Content-Length: 263
I0914 19:06:03.821402 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:03 GMT
I0914 19:06:03.821410 29302 round_trippers.go:580] Audit-Id: a2a9e97f-3007-4290-8f99-481d06fc6049
I0914 19:06:03.821417 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:03.821424 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:03.821433 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:03.821483 29302 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.1",
"gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
"gitTreeState": "clean",
"buildDate": "2023-08-24T11:16:30Z",
"goVersion": "go1.20.7",
"compiler": "gc",
"platform": "linux/amd64"
}
I0914 19:06:03.821569 29302 api_server.go:141] control plane version: v1.28.1
I0914 19:06:03.821589 29302 api_server.go:131] duration metric: took 4.580178903s to wait for apiserver health ...
I0914 19:06:03.821600 29302 cni.go:84] Creating CNI manager for ""
I0914 19:06:03.821611 29302 cni.go:136] 3 nodes found, recommending kindnet
I0914 19:06:03.823525 29302 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0914 19:06:03.825085 29302 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0914 19:06:03.832345 29302 command_runner.go:130] > File: /opt/cni/bin/portmap
I0914 19:06:03.832364 29302 command_runner.go:130] > Size: 2615256 Blocks: 5112 IO Block: 4096 regular file
I0914 19:06:03.832370 29302 command_runner.go:130] > Device: 11h/17d Inode: 3544 Links: 1
I0914 19:06:03.832380 29302 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0914 19:06:03.832391 29302 command_runner.go:130] > Access: 2023-09-14 19:05:33.824543091 +0000
I0914 19:06:03.832399 29302 command_runner.go:130] > Modify: 2023-09-12 03:24:25.000000000 +0000
I0914 19:06:03.832416 29302 command_runner.go:130] > Change: 2023-09-14 19:05:31.874543091 +0000
I0914 19:06:03.832422 29302 command_runner.go:130] > Birth: -
I0914 19:06:03.832466 29302 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
I0914 19:06:03.832475 29302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0914 19:06:03.901488 29302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0914 19:06:05.205755 29302 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0914 19:06:05.209188 29302 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0914 19:06:05.212024 29302 command_runner.go:130] > serviceaccount/kindnet unchanged
I0914 19:06:05.225376 29302 command_runner.go:130] > daemonset.apps/kindnet configured
I0914 19:06:05.229823 29302 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.32829993s)
I0914 19:06:05.229853 29302 system_pods.go:43] waiting for kube-system pods to appear ...
I0914 19:06:05.229964 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
I0914 19:06:05.229975 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.229982 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.229988 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.234117 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:05.234139 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.234149 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.234158 29302 round_trippers.go:580] Audit-Id: 78bdb13b-ed79-4db3-8008-4289bacf78fd
I0914 19:06:05.234172 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.234180 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.234188 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.234195 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.236145 29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"795"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84544 chars]
I0914 19:06:05.239946 29302 system_pods.go:59] 12 kube-system pods found
I0914 19:06:05.239984 29302 system_pods.go:61] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0914 19:06:05.239998 29302 system_pods.go:61] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0914 19:06:05.240008 29302 system_pods.go:61] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0914 19:06:05.240015 29302 system_pods.go:61] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
I0914 19:06:05.240026 29302 system_pods.go:61] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
I0914 19:06:05.240036 29302 system_pods.go:61] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0914 19:06:05.240054 29302 system_pods.go:61] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0914 19:06:05.240067 29302 system_pods.go:61] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
I0914 19:06:05.240073 29302 system_pods.go:61] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
I0914 19:06:05.240087 29302 system_pods.go:61] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0914 19:06:05.240101 29302 system_pods.go:61] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0914 19:06:05.240113 29302 system_pods.go:61] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0914 19:06:05.240123 29302 system_pods.go:74] duration metric: took 10.263188ms to wait for pod list to return data ...
I0914 19:06:05.240135 29302 node_conditions.go:102] verifying NodePressure condition ...
I0914 19:06:05.240193 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
I0914 19:06:05.240202 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.240212 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.240223 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.245363 29302 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0914 19:06:05.245382 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.245393 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.245401 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.245416 29302 round_trippers.go:580] Audit-Id: ee9162aa-d308-4bb2-927d-55e7e1011d87
I0914 19:06:05.245424 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.245435 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.245471 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.245800 29302 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"795"},"items":[{"metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13790 chars]
I0914 19:06:05.246934 29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0914 19:06:05.246965 29302 node_conditions.go:123] node cpu capacity is 2
I0914 19:06:05.246982 29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0914 19:06:05.246996 29302 node_conditions.go:123] node cpu capacity is 2
I0914 19:06:05.247002 29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0914 19:06:05.247012 29302 node_conditions.go:123] node cpu capacity is 2
I0914 19:06:05.247020 29302 node_conditions.go:105] duration metric: took 6.879016ms to run NodePressure ...
I0914 19:06:05.247043 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0914 19:06:05.487041 29302 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I0914 19:06:05.487069 29302 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I0914 19:06:05.487097 29302 kubeadm.go:772] waiting for restarted kubelet to initialise ...
I0914 19:06:05.487490 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
I0914 19:06:05.487506 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.487516 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.487526 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.491797 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:05.491820 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.491831 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.491840 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.491848 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.491857 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.491866 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.491875 29302 round_trippers.go:580] Audit-Id: 9814298e-c189-437e-bfca-dbe0a19423d2
I0914 19:06:05.492280 29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"797"},"items":[{"metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 29761 chars]
I0914 19:06:05.493221 29302 kubeadm.go:787] kubelet initialised
I0914 19:06:05.493240 29302 kubeadm.go:788] duration metric: took 6.131207ms waiting for restarted kubelet to initialise ...
I0914 19:06:05.493249 29302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0914 19:06:05.493307 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
I0914 19:06:05.493322 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.493334 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.493347 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.496849 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:05.496867 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.496876 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.496885 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.496892 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.496901 29302 round_trippers.go:580] Audit-Id: a7031aa1-24df-4c90-9e52-85f8f96f783c
I0914 19:06:05.496912 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.496921 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.497873 29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"797"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84544 chars]
I0914 19:06:05.500273 29302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
I0914 19:06:05.500335 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:05.500343 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.500350 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.500356 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.502411 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:05.502429 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.502441 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.502449 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.502459 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.502469 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.502478 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.502490 29302 round_trippers.go:580] Audit-Id: f347830a-65d2-4cb4-8423-8b8fc5cc870f
I0914 19:06:05.502830 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:05.503304 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:05.503318 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.503328 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.503337 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.505839 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:05.505853 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.505864 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.505870 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.505875 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.505880 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.505886 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.505894 29302 round_trippers.go:580] Audit-Id: 71902073-b1b8-4c71-b1d1-af71d48217f1
I0914 19:06:05.506071 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:05.506467 29302 pod_ready.go:97] node "multinode-040952" hosting pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.506490 29302 pod_ready.go:81] duration metric: took 6.199179ms waiting for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
E0914 19:06:05.506501 29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.506518 29302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:05.506572 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
I0914 19:06:05.506583 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.506593 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.506606 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.508379 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:05.508391 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.508397 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.508403 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.508408 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.508414 29302 round_trippers.go:580] Audit-Id: adfe03d4-2812-4ba5-98dd-67afaa529395
I0914 19:06:05.508419 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.508425 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.508772 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
I0914 19:06:05.509094 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:05.509104 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.509111 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.509116 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.510985 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:05.511003 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.511012 29302 round_trippers.go:580] Audit-Id: 0ee321ba-916a-449f-a719-2eb1a4973cde
I0914 19:06:05.511019 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.511028 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.511036 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.511044 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.511057 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.511184 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:05.511454 29302 pod_ready.go:97] node "multinode-040952" hosting pod "etcd-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.511470 29302 pod_ready.go:81] duration metric: took 4.945047ms waiting for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
E0914 19:06:05.511477 29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "etcd-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.511489 29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:05.511533 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-040952
I0914 19:06:05.511540 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.511546 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.511552 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.513172 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:05.513189 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.513198 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.513206 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.513213 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.513222 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.513230 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.513246 29302 round_trippers.go:580] Audit-Id: 98886ad5-cb3e-42c1-9236-b75a8e09f5f5
I0914 19:06:05.513380 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-040952","namespace":"kube-system","uid":"10fd42d2-c2af-48e4-8724-c8ffe95daa20","resourceVersion":"786","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.14:8443","kubernetes.io/config.hash":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.mirror":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.seen":"2023-09-14T19:01:40.726715710Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7850 chars]
I0914 19:06:05.513760 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:05.513773 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.513780 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.513786 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.515437 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:05.515456 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.515464 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.515472 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.515481 29302 round_trippers.go:580] Audit-Id: cc794f2f-df9b-4b8c-8271-303fbb3bda2a
I0914 19:06:05.515489 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.515502 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.515510 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.515753 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:05.516001 29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-apiserver-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.516014 29302 pod_ready.go:81] duration metric: took 4.515313ms waiting for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
E0914 19:06:05.516021 29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-apiserver-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.516027 29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:05.516066 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-040952
I0914 19:06:05.516073 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.516080 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.516086 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.518245 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:05.518263 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.518277 29302 round_trippers.go:580] Audit-Id: 6779b7f0-25f9-49d1-be85-87a44d8c3552
I0914 19:06:05.518286 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.518294 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.518301 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.518314 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.518322 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.518564 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-040952","namespace":"kube-system","uid":"a3657cb3-c202-4067-83e1-e015b97f23c7","resourceVersion":"783","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.mirror":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.seen":"2023-09-14T19:01:40.726708753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7436 chars]
I0914 19:06:05.630264 29302 request.go:629] Waited for 111.324976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:05.630352 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:05.630359 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.630372 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.630382 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.632981 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:05.633000 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.633006 29302 round_trippers.go:580] Audit-Id: fd7872d6-edd4-429f-97f2-b2ec1c12de54
I0914 19:06:05.633012 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.633017 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.633023 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.633028 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.633036 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.633196 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:05.633629 29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-controller-manager-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.633656 29302 pod_ready.go:81] duration metric: took 117.619154ms waiting for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
E0914 19:06:05.633669 29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-controller-manager-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.633680 29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
I0914 19:06:05.830043 29302 request.go:629] Waited for 196.287848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
I0914 19:06:05.830099 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
I0914 19:06:05.830103 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.830111 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.830118 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.832762 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:05.832785 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.832794 29302 round_trippers.go:580] Audit-Id: 3c18be9a-6c71-4025-be83-5fc9c53246a5
I0914 19:06:05.832801 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.832808 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.832815 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.832822 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.832829 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.833118 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gldkh","generateName":"kube-proxy-","namespace":"kube-system","uid":"55ba7c02-d066-4399-a622-621499fbc662","resourceVersion":"541","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
I0914 19:06:06.029994 29302 request.go:629] Waited for 196.460915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
I0914 19:06:06.030079 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
I0914 19:06:06.030087 29302 round_trippers.go:469] Request Headers:
I0914 19:06:06.030099 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:06.030108 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:06.032502 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:06.032520 29302 round_trippers.go:577] Response Headers:
I0914 19:06:06.032527 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:06.032532 29302 round_trippers.go:580] Audit-Id: 9d3f52cf-02ab-4abb-92c1-8a7d06224f0e
I0914 19:06:06.032538 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:06.032542 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:06.032547 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:06.032553 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:06.032888 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m02","uid":"26bddb4d-d211-4e3d-a188-317e100d2aa5","resourceVersion":"608","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3266 chars]
I0914 19:06:06.033151 29302 pod_ready.go:92] pod "kube-proxy-gldkh" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:06.033165 29302 pod_ready.go:81] duration metric: took 399.477836ms waiting for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
I0914 19:06:06.033173 29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
I0914 19:06:06.230655 29302 request.go:629] Waited for 197.428191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
I0914 19:06:06.230712 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
I0914 19:06:06.230718 29302 round_trippers.go:469] Request Headers:
I0914 19:06:06.230725 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:06.230733 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:06.233365 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:06.233384 29302 round_trippers.go:577] Response Headers:
I0914 19:06:06.233391 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:06 GMT
I0914 19:06:06.233397 29302 round_trippers.go:580] Audit-Id: 53af8c6b-f3d3-4507-ba18-bcb4d7a95376
I0914 19:06:06.233406 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:06.233422 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:06.233431 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:06.233443 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:06.233771 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gpl2p","generateName":"kube-proxy-","namespace":"kube-system","uid":"4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f","resourceVersion":"761","creationTimestamp":"2023-09-14T19:03:50Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:03:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
I0914 19:06:06.430710 29302 request.go:629] Waited for 196.348215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
I0914 19:06:06.430762 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
I0914 19:06:06.430769 29302 round_trippers.go:469] Request Headers:
I0914 19:06:06.430779 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:06.430788 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:06.433906 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:06.433930 29302 round_trippers.go:577] Response Headers:
I0914 19:06:06.433942 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:06.433951 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:06.433960 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:06.433969 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:06 GMT
I0914 19:06:06.433985 29302 round_trippers.go:580] Audit-Id: 1280bf02-d81c-4bca-b4e5-275129840268
I0914 19:06:06.433994 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:06.434112 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m03","uid":"28b45907-e363-4b10-afa7-ecf3cea247b8","resourceVersion":"772","creationTimestamp":"2023-09-14T19:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3204 chars]
I0914 19:06:06.434453 29302 pod_ready.go:92] pod "kube-proxy-gpl2p" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:06.434474 29302 pod_ready.go:81] duration metric: took 401.294532ms waiting for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
I0914 19:06:06.434488 29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
I0914 19:06:06.630939 29302 request.go:629] Waited for 196.385647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
I0914 19:06:06.631022 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
I0914 19:06:06.631030 29302 round_trippers.go:469] Request Headers:
I0914 19:06:06.631042 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:06.631051 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:06.633497 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:06.633520 29302 round_trippers.go:577] Response Headers:
I0914 19:06:06.633530 29302 round_trippers.go:580] Audit-Id: 1dc1f940-384d-494a-8e64-361f1ad205ba
I0914 19:06:06.633543 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:06.633552 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:06.633562 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:06.633573 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:06.633584 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:06 GMT
I0914 19:06:06.633766 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbsmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68fe199-9969-47a9-95a1-04e766c5dbaa","resourceVersion":"788","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5928 chars]
I0914 19:06:06.830679 29302 request.go:629] Waited for 196.393813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:06.830735 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:06.830740 29302 round_trippers.go:469] Request Headers:
I0914 19:06:06.830747 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:06.830754 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:06.833354 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:06.833375 29302 round_trippers.go:577] Response Headers:
I0914 19:06:06.833382 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:06.833387 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:06.833392 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:06.833397 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:06.833402 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:06 GMT
I0914 19:06:06.833407 29302 round_trippers.go:580] Audit-Id: a24b66f4-fa51-4df4-9bc5-590f310c8108
I0914 19:06:06.833985 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:06.834382 29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-proxy-hbsmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:06.834408 29302 pod_ready.go:81] duration metric: took 399.910926ms waiting for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
E0914 19:06:06.834420 29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-proxy-hbsmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:06.834433 29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:07.030857 29302 request.go:629] Waited for 196.352242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:07.030940 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:07.030951 29302 round_trippers.go:469] Request Headers:
I0914 19:06:07.030964 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:07.030977 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:07.034225 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:07.034245 29302 round_trippers.go:577] Response Headers:
I0914 19:06:07.034253 29302 round_trippers.go:580] Audit-Id: 71cfae50-3c69-4f2b-8709-aad710c8dec2
I0914 19:06:07.034260 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:07.034268 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:07.034276 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:07.034289 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:07.034298 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:06 GMT
I0914 19:06:07.034501 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
I0914 19:06:07.230128 29302 request.go:629] Waited for 195.265564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:07.230211 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:07.230221 29302 round_trippers.go:469] Request Headers:
I0914 19:06:07.230229 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:07.230235 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:07.233612 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:07.233631 29302 round_trippers.go:577] Response Headers:
I0914 19:06:07.233641 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:07 GMT
I0914 19:06:07.233648 29302 round_trippers.go:580] Audit-Id: c6e16c92-92f1-4f61-b0d2-523db2c467d1
I0914 19:06:07.233656 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:07.233665 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:07.233675 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:07.233684 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:07.234058 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:07.234344 29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-scheduler-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:07.234368 29302 pod_ready.go:81] duration metric: took 399.923264ms waiting for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
E0914 19:06:07.234381 29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-scheduler-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:07.234393 29302 pod_ready.go:38] duration metric: took 1.741133779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0914 19:06:07.234417 29302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0914 19:06:07.250231 29302 command_runner.go:130] > -16
I0914 19:06:07.250255 29302 ops.go:34] apiserver oom_adj: -16
I0914 19:06:07.250263 29302 kubeadm.go:640] restartCluster took 21.909989817s
I0914 19:06:07.250271 29302 kubeadm.go:406] StartCluster complete in 21.938026901s
I0914 19:06:07.250290 29302 settings.go:142] acquiring lock: {Name:mkaf2d84e9fceec2029b98353d3d8cae1b369e09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0914 19:06:07.250389 29302 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17217-7285/kubeconfig
I0914 19:06:07.251059 29302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-7285/kubeconfig: {Name:mkd810f3a7b7ee0c3e3eff94a19f3da881e8200c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0914 19:06:07.251279 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0914 19:06:07.251383 29302 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0914 19:06:07.253531 29302 out.go:177] * Enabled addons:
I0914 19:06:07.251517 29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 19:06:07.251534 29302 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17217-7285/kubeconfig
I0914 19:06:07.255467 29302 addons.go:502] enable addons completed in 4.093858ms: enabled=[]
I0914 19:06:07.255670 29302 kapi.go:59] client config for multinode-040952: &rest.Config{Host:"https://192.168.39.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key", CAFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0914 19:06:07.255997 29302 round_trippers.go:463] GET https://192.168.39.14:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0914 19:06:07.256010 29302 round_trippers.go:469] Request Headers:
I0914 19:06:07.256017 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:07.256025 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:07.263309 29302 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0914 19:06:07.263329 29302 round_trippers.go:577] Response Headers:
I0914 19:06:07.263340 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:07.263348 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:07.263354 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:07.263359 29302 round_trippers.go:580] Content-Length: 291
I0914 19:06:07.263365 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:07 GMT
I0914 19:06:07.263370 29302 round_trippers.go:580] Audit-Id: 5a75d744-b3cd-40e6-abf4-7b1c8daac075
I0914 19:06:07.263377 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:07.263397 29302 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9776e459-4280-488a-924c-4e921bbd9495","resourceVersion":"796","creationTimestamp":"2023-09-14T19:01:40Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0914 19:06:07.263508 29302 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-040952" context rescaled to 1 replicas
I0914 19:06:07.263529 29302 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0914 19:06:07.264985 29302 out.go:177] * Verifying Kubernetes components...
I0914 19:06:07.266359 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0914 19:06:07.389385 29302 command_runner.go:130] > apiVersion: v1
I0914 19:06:07.389403 29302 command_runner.go:130] > data:
I0914 19:06:07.389408 29302 command_runner.go:130] > Corefile: |
I0914 19:06:07.389411 29302 command_runner.go:130] > .:53 {
I0914 19:06:07.389415 29302 command_runner.go:130] > log
I0914 19:06:07.389421 29302 command_runner.go:130] > errors
I0914 19:06:07.389425 29302 command_runner.go:130] > health {
I0914 19:06:07.389429 29302 command_runner.go:130] > lameduck 5s
I0914 19:06:07.389433 29302 command_runner.go:130] > }
I0914 19:06:07.389437 29302 command_runner.go:130] > ready
I0914 19:06:07.389443 29302 command_runner.go:130] > kubernetes cluster.local in-addr.arpa ip6.arpa {
I0914 19:06:07.389447 29302 command_runner.go:130] > pods insecure
I0914 19:06:07.389455 29302 command_runner.go:130] > fallthrough in-addr.arpa ip6.arpa
I0914 19:06:07.389473 29302 command_runner.go:130] > ttl 30
I0914 19:06:07.389477 29302 command_runner.go:130] > }
I0914 19:06:07.389483 29302 command_runner.go:130] > prometheus :9153
I0914 19:06:07.389487 29302 command_runner.go:130] > hosts {
I0914 19:06:07.389493 29302 command_runner.go:130] > 192.168.39.1 host.minikube.internal
I0914 19:06:07.389497 29302 command_runner.go:130] > fallthrough
I0914 19:06:07.389501 29302 command_runner.go:130] > }
I0914 19:06:07.389508 29302 command_runner.go:130] > forward . /etc/resolv.conf {
I0914 19:06:07.389513 29302 command_runner.go:130] > max_concurrent 1000
I0914 19:06:07.389517 29302 command_runner.go:130] > }
I0914 19:06:07.389520 29302 command_runner.go:130] > cache 30
I0914 19:06:07.389527 29302 command_runner.go:130] > loop
I0914 19:06:07.389532 29302 command_runner.go:130] > reload
I0914 19:06:07.389541 29302 command_runner.go:130] > loadbalance
I0914 19:06:07.389549 29302 command_runner.go:130] > }
I0914 19:06:07.389558 29302 command_runner.go:130] > kind: ConfigMap
I0914 19:06:07.389564 29302 command_runner.go:130] > metadata:
I0914 19:06:07.389573 29302 command_runner.go:130] > creationTimestamp: "2023-09-14T19:01:40Z"
I0914 19:06:07.389585 29302 command_runner.go:130] > name: coredns
I0914 19:06:07.389594 29302 command_runner.go:130] > namespace: kube-system
I0914 19:06:07.389604 29302 command_runner.go:130] > resourceVersion: "404"
I0914 19:06:07.389612 29302 command_runner.go:130] > uid: 77b79b35-a304-4075-b4c4-6b8a52cfe75c
I0914 19:06:07.389643 29302 node_ready.go:35] waiting up to 6m0s for node "multinode-040952" to be "Ready" ...
I0914 19:06:07.389797 29302 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0914 19:06:07.431021 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:07.431047 29302 round_trippers.go:469] Request Headers:
I0914 19:06:07.431059 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:07.431069 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:07.434336 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:07.434359 29302 round_trippers.go:577] Response Headers:
I0914 19:06:07.434367 29302 round_trippers.go:580] Audit-Id: f0218504-ef8b-4fee-a836-3f16c97e6d1d
I0914 19:06:07.434372 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:07.434378 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:07.434383 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:07.434389 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:07.434399 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:07 GMT
I0914 19:06:07.434888 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:07.630657 29302 request.go:629] Waited for 195.358734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:07.630713 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:07.630720 29302 round_trippers.go:469] Request Headers:
I0914 19:06:07.630729 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:07.630738 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:07.635002 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:07.635021 29302 round_trippers.go:577] Response Headers:
I0914 19:06:07.635027 29302 round_trippers.go:580] Audit-Id: 0e51cba7-34eb-44c3-be48-8785725a128f
I0914 19:06:07.635033 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:07.635038 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:07.635043 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:07.635048 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:07.635053 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:07 GMT
I0914 19:06:07.635788 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:08.136884 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:08.136903 29302 round_trippers.go:469] Request Headers:
I0914 19:06:08.136913 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:08.136919 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:08.140137 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:08.140160 29302 round_trippers.go:577] Response Headers:
I0914 19:06:08.140168 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:08 GMT
I0914 19:06:08.140173 29302 round_trippers.go:580] Audit-Id: 9ec77217-1afd-42b6-aaf7-211e85629e48
I0914 19:06:08.140179 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:08.140184 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:08.140189 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:08.140194 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:08.140344 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:08.637040 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:08.637079 29302 round_trippers.go:469] Request Headers:
I0914 19:06:08.637091 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:08.637101 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:08.639714 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:08.639733 29302 round_trippers.go:577] Response Headers:
I0914 19:06:08.639744 29302 round_trippers.go:580] Audit-Id: d47f9fd4-8dec-46b1-8ce9-436c0350c5ca
I0914 19:06:08.639752 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:08.639760 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:08.639769 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:08.639779 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:08.639788 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:08 GMT
I0914 19:06:08.640112 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:09.136649 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:09.136682 29302 round_trippers.go:469] Request Headers:
I0914 19:06:09.136690 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:09.136696 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:09.139686 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:09.139704 29302 round_trippers.go:577] Response Headers:
I0914 19:06:09.139715 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:09.139724 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:09.139733 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:09.139739 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:09 GMT
I0914 19:06:09.139745 29302 round_trippers.go:580] Audit-Id: ae97ecdc-ac59-4df9-80fb-ab01ff2852ec
I0914 19:06:09.139750 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:09.140167 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:09.636845 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:09.636866 29302 round_trippers.go:469] Request Headers:
I0914 19:06:09.636874 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:09.636880 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:09.639508 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:09.639525 29302 round_trippers.go:577] Response Headers:
I0914 19:06:09.639534 29302 round_trippers.go:580] Audit-Id: 2a2efe7f-361b-45a2-b3cb-a7e9e84043e9
I0914 19:06:09.639541 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:09.639549 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:09.639558 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:09.639568 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:09.639578 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:09 GMT
I0914 19:06:09.639997 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:09.640405 29302 node_ready.go:58] node "multinode-040952" has status "Ready":"False"
I0914 19:06:10.136599 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:10.136624 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.136638 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.136648 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.140273 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:10.140297 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.140306 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.140313 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.140320 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.140332 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.140340 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.140347 29302 round_trippers.go:580] Audit-Id: 1af6dc6d-a25f-4a81-86a3-d239224c606e
I0914 19:06:10.140506 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:10.140798 29302 node_ready.go:49] node "multinode-040952" has status "Ready":"True"
I0914 19:06:10.140815 29302 node_ready.go:38] duration metric: took 2.751153874s waiting for node "multinode-040952" to be "Ready" ...
I0914 19:06:10.140825 29302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0914 19:06:10.140877 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
I0914 19:06:10.140887 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.140897 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.140907 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.145518 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:10.145535 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.145542 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.145547 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.145557 29302 round_trippers.go:580] Audit-Id: d738ec8e-27bb-4210-8329-89e64df5055c
I0914 19:06:10.145569 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.145579 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.145590 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.146881 29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"868"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83954 chars]
I0914 19:06:10.149263 29302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
I0914 19:06:10.149331 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:10.149342 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.149353 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.149364 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.151221 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:10.151235 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.151241 29302 round_trippers.go:580] Audit-Id: 9dce5aa8-17a9-43c4-9448-421e8ef000fe
I0914 19:06:10.151247 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.151255 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.151264 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.151281 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.151288 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.151447 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:10.151815 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:10.151829 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.151839 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.151847 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.154035 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:10.154047 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.154053 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.154058 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.154063 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.154069 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.154075 29302 round_trippers.go:580] Audit-Id: f451201e-e118-40ff-8809-e06aa3aa8567
I0914 19:06:10.154084 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.154352 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:10.154718 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:10.154731 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.154742 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.154752 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.156468 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:10.156482 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.156491 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.156501 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.156513 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.156524 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.156538 29302 round_trippers.go:580] Audit-Id: 056aca82-7d21-4539-9de8-316f54300fbb
I0914 19:06:10.156548 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.156671 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:10.157120 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:10.157136 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.157147 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.157162 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.159000 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:10.159014 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.159023 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.159031 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.159039 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.159049 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.159059 29302 round_trippers.go:580] Audit-Id: 053f7e6a-3d64-496b-a692-e6d8d7de77dc
I0914 19:06:10.159074 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.159292 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:10.660315 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:10.660343 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.660354 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.660364 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.662669 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:10.662688 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.662694 29302 round_trippers.go:580] Audit-Id: 0b5959bf-4f92-40f5-bff0-64259ee8d0e9
I0914 19:06:10.662703 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.662711 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.662723 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.662732 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.662744 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.663162 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:10.663793 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:10.663810 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.663822 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.663830 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.667280 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:10.667294 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.667299 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.667304 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.667310 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.667315 29302 round_trippers.go:580] Audit-Id: adc471fd-2452-48eb-9634-4a15a4129e27
I0914 19:06:10.667320 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.667325 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.667519 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:11.160702 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:11.160731 29302 round_trippers.go:469] Request Headers:
I0914 19:06:11.160744 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:11.160753 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:11.164208 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:11.164227 29302 round_trippers.go:577] Response Headers:
I0914 19:06:11.164234 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:11.164240 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:11 GMT
I0914 19:06:11.164261 29302 round_trippers.go:580] Audit-Id: 3b81510c-ceb9-488e-bc2e-b21d77b051e2
I0914 19:06:11.164273 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:11.164281 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:11.164290 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:11.164555 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:11.165152 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:11.165174 29302 round_trippers.go:469] Request Headers:
I0914 19:06:11.165187 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:11.165197 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:11.168098 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:11.168117 29302 round_trippers.go:577] Response Headers:
I0914 19:06:11.168125 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:11.168133 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:11.168142 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:11 GMT
I0914 19:06:11.168151 29302 round_trippers.go:580] Audit-Id: 15145bd3-b367-4e99-b3ce-0ae58ef5c733
I0914 19:06:11.168161 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:11.168168 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:11.168530 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:11.660168 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:11.660193 29302 round_trippers.go:469] Request Headers:
I0914 19:06:11.660205 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:11.660216 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:11.663403 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:11.663424 29302 round_trippers.go:577] Response Headers:
I0914 19:06:11.663434 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:11.663442 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:11 GMT
I0914 19:06:11.663449 29302 round_trippers.go:580] Audit-Id: 3362ce2b-8605-45fd-8885-3eaeb408ef56
I0914 19:06:11.663457 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:11.663466 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:11.663476 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:11.664334 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:11.664760 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:11.664775 29302 round_trippers.go:469] Request Headers:
I0914 19:06:11.664785 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:11.664795 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:11.671505 29302 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0914 19:06:11.671522 29302 round_trippers.go:577] Response Headers:
I0914 19:06:11.671530 29302 round_trippers.go:580] Audit-Id: 654293a2-0981-4bec-9543-4726a90c72a3
I0914 19:06:11.671539 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:11.671551 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:11.671560 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:11.671567 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:11.671576 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:11 GMT
I0914 19:06:11.671723 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:12.160486 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:12.160512 29302 round_trippers.go:469] Request Headers:
I0914 19:06:12.160524 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:12.160534 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:12.163604 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:12.163624 29302 round_trippers.go:577] Response Headers:
I0914 19:06:12.163634 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:12.163644 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:12.163652 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:12.163661 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:12 GMT
I0914 19:06:12.163674 29302 round_trippers.go:580] Audit-Id: 746f41fe-b54a-4602-ba74-6665d07e9fc7
I0914 19:06:12.163683 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:12.164257 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:12.164698 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:12.164712 29302 round_trippers.go:469] Request Headers:
I0914 19:06:12.164721 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:12.164731 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:12.166907 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:12.166920 29302 round_trippers.go:577] Response Headers:
I0914 19:06:12.166926 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:12.166934 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:12.166942 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:12 GMT
I0914 19:06:12.166953 29302 round_trippers.go:580] Audit-Id: e83a6e6d-40cb-4779-8c0a-8f5c050ff286
I0914 19:06:12.166961 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:12.166970 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:12.167376 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:12.167641 29302 pod_ready.go:102] pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace has status "Ready":"False"
I0914 19:06:12.660012 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:12.660034 29302 round_trippers.go:469] Request Headers:
I0914 19:06:12.660051 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:12.660059 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:12.664300 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:12.664327 29302 round_trippers.go:577] Response Headers:
I0914 19:06:12.664338 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:12.664345 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:12.664352 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:12.664360 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:12.664369 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:12 GMT
I0914 19:06:12.664384 29302 round_trippers.go:580] Audit-Id: 49e3af30-584c-4ef5-942f-2f32701b7bc7
I0914 19:06:12.665270 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:12.665705 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:12.665719 29302 round_trippers.go:469] Request Headers:
I0914 19:06:12.665729 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:12.665738 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:12.668068 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:12.668088 29302 round_trippers.go:577] Response Headers:
I0914 19:06:12.668097 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:12.668105 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:12.668112 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:12 GMT
I0914 19:06:12.668120 29302 round_trippers.go:580] Audit-Id: 28f046b6-f759-4197-80f7-730e48f958ff
I0914 19:06:12.668128 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:12.668142 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:12.668260 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:13.159876 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:13.159904 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.159912 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.159918 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.163892 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:13.163917 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.163928 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.163937 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.163944 29302 round_trippers.go:580] Audit-Id: 2bafd162-6571-48ef-8c6f-4b72770d2047
I0914 19:06:13.163952 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.163966 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.163976 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.165138 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
I0914 19:06:13.165753 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:13.165771 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.165782 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.165791 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.168088 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:13.168105 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.168112 29302 round_trippers.go:580] Audit-Id: 767659c2-2c07-4c69-b006-9d19ff6d9f6d
I0914 19:06:13.168118 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.168123 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.168128 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.168135 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.168143 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.168401 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:13.168681 29302 pod_ready.go:92] pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:13.168695 29302 pod_ready.go:81] duration metric: took 3.01941396s waiting for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
I0914 19:06:13.168703 29302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:13.168801 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
I0914 19:06:13.168814 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.168832 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.168846 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.171347 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:13.171368 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.171375 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.171380 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.171388 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.171397 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.171404 29302 round_trippers.go:580] Audit-Id: b18d0768-dc31-460c-beed-e50e3a19d6cf
I0914 19:06:13.171411 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.172044 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
I0914 19:06:13.172379 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:13.172391 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.172399 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.172405 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.175143 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:13.175157 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.175163 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.175168 29302 round_trippers.go:580] Audit-Id: f6242de5-c366-4c79-aa4f-5b2c5ce0d01e
I0914 19:06:13.175174 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.175182 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.175190 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.175200 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.176009 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:13.176284 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
I0914 19:06:13.176295 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.176301 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.176307 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.178355 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:13.178376 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.178382 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.178387 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.178393 29302 round_trippers.go:580] Audit-Id: 8172c157-f43e-42e0-b3a6-8cbd28c89432
I0914 19:06:13.178401 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.178409 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.178417 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.178832 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
I0914 19:06:13.179275 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:13.179292 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.179302 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.179309 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.180983 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:13.180994 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.180999 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.181004 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.181009 29302 round_trippers.go:580] Audit-Id: 7d797daa-6bd3-4f35-8046-01886aa5fa4e
I0914 19:06:13.181014 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.181019 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.181024 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.181219 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:13.682300 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
I0914 19:06:13.682333 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.682342 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.682347 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.685143 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:13.685160 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.685166 29302 round_trippers.go:580] Audit-Id: 0910f73d-781a-443b-b8e1-0d453e50ba92
I0914 19:06:13.685172 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.685177 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.685182 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.685187 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.685192 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.685503 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
I0914 19:06:13.685920 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:13.685934 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.685941 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.685947 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.688227 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:13.688240 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.688246 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.688252 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.688260 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.688268 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.688281 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.688288 29302 round_trippers.go:580] Audit-Id: 078b7d2a-29bc-4729-9a02-7236c4049ad7
I0914 19:06:13.688474 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:14.182102 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
I0914 19:06:14.182125 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.182133 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.182140 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.187517 29302 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0914 19:06:14.187544 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.187554 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.187562 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.187569 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.187577 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.187586 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.187594 29302 round_trippers.go:580] Audit-Id: dd780464-2280-4b93-b398-b175b603d0fe
I0914 19:06:14.188035 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
I0914 19:06:14.188554 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:14.188572 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.188583 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.188592 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.190606 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:14.190620 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.190626 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.190632 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.190637 29302 round_trippers.go:580] Audit-Id: 104efd51-1025-4755-af8b-f207cfcdb912
I0914 19:06:14.190642 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.190647 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.190652 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.190979 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:14.682687 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
I0914 19:06:14.682711 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.682719 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.682725 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.690728 29302 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0914 19:06:14.690764 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.690775 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.690783 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.690791 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.690799 29302 round_trippers.go:580] Audit-Id: 4dc518a5-6cbd-4561-8ed6-e72b82b2abda
I0914 19:06:14.690806 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.690814 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.690995 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"887","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6071 chars]
I0914 19:06:14.691406 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:14.691420 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.691427 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.691433 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.697743 29302 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0914 19:06:14.697765 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.697774 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.697779 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.697784 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.697789 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.697794 29302 round_trippers.go:580] Audit-Id: 07d3511e-72f3-415a-b985-0c38f9c2dc48
I0914 19:06:14.697799 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.698080 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:14.698416 29302 pod_ready.go:92] pod "etcd-multinode-040952" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:14.698432 29302 pod_ready.go:81] duration metric: took 1.529723471s waiting for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:14.698448 29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:14.698508 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-040952
I0914 19:06:14.698517 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.698524 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.698530 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.703391 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:14.703406 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.703412 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.703418 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.703423 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.703428 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.703433 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.703439 29302 round_trippers.go:580] Audit-Id: 0b9ff4df-c192-426d-837d-19a8ddc6d994
I0914 19:06:14.703718 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-040952","namespace":"kube-system","uid":"10fd42d2-c2af-48e4-8724-c8ffe95daa20","resourceVersion":"871","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.14:8443","kubernetes.io/config.hash":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.mirror":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.seen":"2023-09-14T19:01:40.726715710Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7606 chars]
I0914 19:06:14.704127 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:14.704140 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.704147 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.704153 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.706425 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:14.706444 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.706451 29302 round_trippers.go:580] Audit-Id: 6eee19bb-2b91-4350-b2ae-7edfbd41930d
I0914 19:06:14.706457 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.706462 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.706467 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.706472 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.706478 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.706615 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:14.706908 29302 pod_ready.go:92] pod "kube-apiserver-multinode-040952" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:14.706921 29302 pod_ready.go:81] duration metric: took 8.465952ms waiting for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:14.706930 29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:14.706986 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-040952
I0914 19:06:14.706996 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.707007 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.707017 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.710085 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:14.710105 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.710115 29302 round_trippers.go:580] Audit-Id: 37a4af49-de22-42c5-8342-96bdccfba829
I0914 19:06:14.710126 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.710135 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.710143 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.710152 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.710160 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.710726 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-040952","namespace":"kube-system","uid":"a3657cb3-c202-4067-83e1-e015b97f23c7","resourceVersion":"884","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.mirror":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.seen":"2023-09-14T19:01:40.726708753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7174 chars]
I0914 19:06:14.830503 29302 request.go:629] Waited for 119.282235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:14.830554 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:14.830558 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.830566 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.830572 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.833064 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:14.833083 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.833090 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.833095 29302 round_trippers.go:580] Audit-Id: 7a8584d4-7b4d-4f0c-a673-2711303dfb2c
I0914 19:06:14.833100 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.833106 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.833110 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.833116 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.833241 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:14.833562 29302 pod_ready.go:92] pod "kube-controller-manager-multinode-040952" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:14.833577 29302 pod_ready.go:81] duration metric: took 126.641384ms waiting for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:14.833587 29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
I0914 19:06:15.030888 29302 request.go:629] Waited for 197.237265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
I0914 19:06:15.030946 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
I0914 19:06:15.030951 29302 round_trippers.go:469] Request Headers:
I0914 19:06:15.030960 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:15.030966 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:15.034339 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:15.034359 29302 round_trippers.go:577] Response Headers:
I0914 19:06:15.034366 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:15.034374 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:15.034386 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:15.034394 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:15.034408 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:15.034416 29302 round_trippers.go:580] Audit-Id: 3c39cfc6-1f06-4726-9679-50e437a9b84d
I0914 19:06:15.034690 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gldkh","generateName":"kube-proxy-","namespace":"kube-system","uid":"55ba7c02-d066-4399-a622-621499fbc662","resourceVersion":"541","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
I0914 19:06:15.230480 29302 request.go:629] Waited for 195.333524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
I0914 19:06:15.230552 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
I0914 19:06:15.230557 29302 round_trippers.go:469] Request Headers:
I0914 19:06:15.230565 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:15.230574 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:15.234304 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:15.234329 29302 round_trippers.go:577] Response Headers:
I0914 19:06:15.234339 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:15.234347 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:15.234359 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:15.234366 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:15 GMT
I0914 19:06:15.234377 29302 round_trippers.go:580] Audit-Id: 4a324e73-8fa1-482f-bde6-ae80be99f721
I0914 19:06:15.234386 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:15.234528 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m02","uid":"26bddb4d-d211-4e3d-a188-317e100d2aa5","resourceVersion":"608","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3266 chars]
I0914 19:06:15.234774 29302 pod_ready.go:92] pod "kube-proxy-gldkh" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:15.234787 29302 pod_ready.go:81] duration metric: took 401.195035ms waiting for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
I0914 19:06:15.234796 29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
I0914 19:06:15.430003 29302 request.go:629] Waited for 195.152769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
I0914 19:06:15.430096 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
I0914 19:06:15.430104 29302 round_trippers.go:469] Request Headers:
I0914 19:06:15.430118 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:15.430142 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:15.433237 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:15.433271 29302 round_trippers.go:577] Response Headers:
I0914 19:06:15.433281 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:15.433290 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:15.433300 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:15 GMT
I0914 19:06:15.433309 29302 round_trippers.go:580] Audit-Id: 92d372f9-e9c9-4d13-8b75-1b3ebd7f2435
I0914 19:06:15.433321 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:15.433329 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:15.433627 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gpl2p","generateName":"kube-proxy-","namespace":"kube-system","uid":"4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f","resourceVersion":"761","creationTimestamp":"2023-09-14T19:03:50Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:03:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
I0914 19:06:15.630434 29302 request.go:629] Waited for 196.369841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
I0914 19:06:15.630534 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
I0914 19:06:15.630546 29302 round_trippers.go:469] Request Headers:
I0914 19:06:15.630557 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:15.630568 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:15.633799 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:15.633824 29302 round_trippers.go:577] Response Headers:
I0914 19:06:15.633834 29302 round_trippers.go:580] Audit-Id: 8ea32575-14e9-412a-ba38-fd00269447f5
I0914 19:06:15.633844 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:15.633852 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:15.633864 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:15.633873 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:15.633887 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:15 GMT
I0914 19:06:15.634144 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m03","uid":"28b45907-e363-4b10-afa7-ecf3cea247b8","resourceVersion":"891","creationTimestamp":"2023-09-14T19:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3084 chars]
I0914 19:06:15.634401 29302 pod_ready.go:92] pod "kube-proxy-gpl2p" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:15.634416 29302 pod_ready.go:81] duration metric: took 399.614214ms waiting for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
I0914 19:06:15.634430 29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
I0914 19:06:15.830846 29302 request.go:629] Waited for 196.353294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
I0914 19:06:15.830928 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
I0914 19:06:15.830933 29302 round_trippers.go:469] Request Headers:
I0914 19:06:15.830945 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:15.830952 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:15.834221 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:15.834246 29302 round_trippers.go:577] Response Headers:
I0914 19:06:15.834259 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:15.834267 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:15.834274 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:15 GMT
I0914 19:06:15.834282 29302 round_trippers.go:580] Audit-Id: 44182567-ce38-4fce-a842-f78410d89ee9
I0914 19:06:15.834289 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:15.834298 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:15.834802 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbsmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68fe199-9969-47a9-95a1-04e766c5dbaa","resourceVersion":"798","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
I0914 19:06:16.030675 29302 request.go:629] Waited for 195.45562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:16.030731 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:16.030736 29302 round_trippers.go:469] Request Headers:
I0914 19:06:16.030743 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:16.030750 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:16.034236 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:16.034260 29302 round_trippers.go:577] Response Headers:
I0914 19:06:16.034267 29302 round_trippers.go:580] Audit-Id: e468604d-7ce9-469a-b812-ed3c9c650d6e
I0914 19:06:16.034275 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:16.034281 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:16.034286 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:16.034291 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:16.034297 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:16 GMT
I0914 19:06:16.034614 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:16.034941 29302 pod_ready.go:92] pod "kube-proxy-hbsmt" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:16.034956 29302 pod_ready.go:81] duration metric: took 400.519289ms waiting for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
I0914 19:06:16.034964 29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:16.230342 29302 request.go:629] Waited for 195.324407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:16.230449 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:16.230454 29302 round_trippers.go:469] Request Headers:
I0914 19:06:16.230462 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:16.230470 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:16.233547 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:16.233564 29302 round_trippers.go:577] Response Headers:
I0914 19:06:16.233572 29302 round_trippers.go:580] Audit-Id: 224fde99-6866-4d6c-81fe-2f97bc0c6734
I0914 19:06:16.233577 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:16.233587 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:16.233592 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:16.233597 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:16.233602 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:16 GMT
I0914 19:06:16.233823 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
I0914 19:06:16.430509 29302 request.go:629] Waited for 196.339279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:16.430573 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:16.430580 29302 round_trippers.go:469] Request Headers:
I0914 19:06:16.430590 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:16.430600 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:16.433517 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:16.433535 29302 round_trippers.go:577] Response Headers:
I0914 19:06:16.433542 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:16.433559 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:16.433565 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:16 GMT
I0914 19:06:16.433571 29302 round_trippers.go:580] Audit-Id: 1da1d693-84a7-4480-b07f-7a386588f044
I0914 19:06:16.433576 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:16.433581 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:16.433983 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:16.630679 29302 request.go:629] Waited for 196.348452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:16.630764 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:16.630769 29302 round_trippers.go:469] Request Headers:
I0914 19:06:16.630776 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:16.630783 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:16.633557 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:16.633575 29302 round_trippers.go:577] Response Headers:
I0914 19:06:16.633582 29302 round_trippers.go:580] Audit-Id: 2136e32a-148d-4e1d-825d-95e56e17f7f3
I0914 19:06:16.633589 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:16.633597 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:16.633605 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:16.633612 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:16.633629 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:16 GMT
I0914 19:06:16.634402 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
I0914 19:06:16.830072 29302 request.go:629] Waited for 195.313935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:16.830145 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:16.830152 29302 round_trippers.go:469] Request Headers:
I0914 19:06:16.830160 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:16.830168 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:16.832962 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:16.832981 29302 round_trippers.go:577] Response Headers:
I0914 19:06:16.832988 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:16.832993 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:16.832998 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:16.833006 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:16.833011 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:16 GMT
I0914 19:06:16.833016 29302 round_trippers.go:580] Audit-Id: 685468aa-007f-4cd0-908f-286f4b9b8738
I0914 19:06:16.833566 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:17.334599 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:17.334622 29302 round_trippers.go:469] Request Headers:
I0914 19:06:17.334645 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:17.334652 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:17.337790 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:17.337810 29302 round_trippers.go:577] Response Headers:
I0914 19:06:17.337817 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:17.337823 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:17.337828 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:17 GMT
I0914 19:06:17.337835 29302 round_trippers.go:580] Audit-Id: 13885e51-e7a2-41bd-a4e6-27c1810b7f5b
I0914 19:06:17.337843 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:17.337850 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:17.338071 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
I0914 19:06:17.338439 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:17.338455 29302 round_trippers.go:469] Request Headers:
I0914 19:06:17.338465 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:17.338474 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:17.340824 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:17.340837 29302 round_trippers.go:577] Response Headers:
I0914 19:06:17.340843 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:17 GMT
I0914 19:06:17.340848 29302 round_trippers.go:580] Audit-Id: e2df7950-3f43-43ac-a2ff-9ebcb6aba048
I0914 19:06:17.340854 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:17.340862 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:17.340871 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:17.340883 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:17.341277 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:17.834981 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:17.835006 29302 round_trippers.go:469] Request Headers:
I0914 19:06:17.835015 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:17.835021 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:17.837948 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:17.837973 29302 round_trippers.go:577] Response Headers:
I0914 19:06:17.837984 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:17 GMT
I0914 19:06:17.837992 29302 round_trippers.go:580] Audit-Id: bf96bd3c-445d-4267-b684-9a852b7ce0ca
I0914 19:06:17.838000 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:17.838008 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:17.838020 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:17.838027 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:17.838816 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
I0914 19:06:17.839223 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:17.839236 29302 round_trippers.go:469] Request Headers:
I0914 19:06:17.839244 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:17.839250 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:17.842020 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:17.842042 29302 round_trippers.go:577] Response Headers:
I0914 19:06:17.842052 29302 round_trippers.go:580] Audit-Id: 58f6c61f-2107-4d49-bc25-beaf577ebc0b
I0914 19:06:17.842063 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:17.842073 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:17.842084 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:17.842094 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:17.842104 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:17 GMT
I0914 19:06:17.842191 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:18.334912 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:18.334936 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.334944 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.334950 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.337727 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:18.337753 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.337763 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.337772 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.337784 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.337793 29302 round_trippers.go:580] Audit-Id: 91452a7a-9433-48f7-bb48-08448530a97b
I0914 19:06:18.337804 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.337811 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.338243 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"894","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4904 chars]
I0914 19:06:18.338636 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:18.338654 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.338664 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.338674 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.342026 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:18.342059 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.342068 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.342078 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.342085 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.342096 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.342104 29302 round_trippers.go:580] Audit-Id: a5dad678-33fe-4c2f-a5f5-c10a6380266e
I0914 19:06:18.342118 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.342444 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:18.342720 29302 pod_ready.go:92] pod "kube-scheduler-multinode-040952" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:18.342732 29302 pod_ready.go:81] duration metric: took 2.30776305s waiting for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:18.342741 29302 pod_ready.go:38] duration metric: took 8.201906021s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0914 19:06:18.342758 29302 api_server.go:52] waiting for apiserver process to appear ...
I0914 19:06:18.342802 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:06:18.356335 29302 command_runner.go:130] > 1693
I0914 19:06:18.356824 29302 api_server.go:72] duration metric: took 11.093271286s to wait for apiserver process to appear ...
I0914 19:06:18.356842 29302 api_server.go:88] waiting for apiserver healthz status ...
I0914 19:06:18.356862 29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
I0914 19:06:18.362653 29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
ok
I0914 19:06:18.362710 29302 round_trippers.go:463] GET https://192.168.39.14:8443/version
I0914 19:06:18.362717 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.362725 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.362731 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.363650 29302 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0914 19:06:18.363667 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.363677 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.363686 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.363694 29302 round_trippers.go:580] Content-Length: 263
I0914 19:06:18.363711 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.363719 29302 round_trippers.go:580] Audit-Id: 01d336c4-24b2-4b6e-a634-c932a4f80f56
I0914 19:06:18.363728 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.363733 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.363748 29302 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.1",
"gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
"gitTreeState": "clean",
"buildDate": "2023-08-24T11:16:30Z",
"goVersion": "go1.20.7",
"compiler": "gc",
"platform": "linux/amd64"
}
I0914 19:06:18.363790 29302 api_server.go:141] control plane version: v1.28.1
I0914 19:06:18.363805 29302 api_server.go:131] duration metric: took 6.957442ms to wait for apiserver health ...
I0914 19:06:18.363814 29302 system_pods.go:43] waiting for kube-system pods to appear ...
I0914 19:06:18.363875 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
I0914 19:06:18.363883 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.363889 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.363900 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.367955 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:18.367989 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.367997 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.368005 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.368013 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.368025 29302 round_trippers.go:580] Audit-Id: 4a4def47-e1cc-4f97-a173-69327418d154
I0914 19:06:18.368035 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.368044 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.369884 29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82928 chars]
I0914 19:06:18.373265 29302 system_pods.go:59] 12 kube-system pods found
I0914 19:06:18.373287 29302 system_pods.go:61] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running
I0914 19:06:18.373292 29302 system_pods.go:61] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running
I0914 19:06:18.373296 29302 system_pods.go:61] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running
I0914 19:06:18.373299 29302 system_pods.go:61] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
I0914 19:06:18.373303 29302 system_pods.go:61] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
I0914 19:06:18.373307 29302 system_pods.go:61] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running
I0914 19:06:18.373312 29302 system_pods.go:61] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running
I0914 19:06:18.373315 29302 system_pods.go:61] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
I0914 19:06:18.373326 29302 system_pods.go:61] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
I0914 19:06:18.373335 29302 system_pods.go:61] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running
I0914 19:06:18.373339 29302 system_pods.go:61] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running
I0914 19:06:18.373342 29302 system_pods.go:61] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running
I0914 19:06:18.373347 29302 system_pods.go:74] duration metric: took 9.528517ms to wait for pod list to return data ...
I0914 19:06:18.373355 29302 default_sa.go:34] waiting for default service account to be created ...
I0914 19:06:18.430623 29302 request.go:629] Waited for 57.191118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
I0914 19:06:18.430678 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
I0914 19:06:18.430682 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.430689 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.430695 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.433750 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:18.433768 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.433775 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.433780 29302 round_trippers.go:580] Content-Length: 261
I0914 19:06:18.433785 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.433790 29302 round_trippers.go:580] Audit-Id: f58f454f-de35-4fde-b782-3e31600d0a05
I0914 19:06:18.433795 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.433803 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.433808 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.433825 29302 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"751abfd7-43aa-4bf5-a223-71659884f01c","resourceVersion":"335","creationTimestamp":"2023-09-14T19:01:53Z"}}]}
I0914 19:06:18.433967 29302 default_sa.go:45] found service account: "default"
I0914 19:06:18.433981 29302 default_sa.go:55] duration metric: took 60.621039ms for default service account to be created ...
I0914 19:06:18.433987 29302 system_pods.go:116] waiting for k8s-apps to be running ...
I0914 19:06:18.630408 29302 request.go:629] Waited for 196.359387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
I0914 19:06:18.630467 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
I0914 19:06:18.630472 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.630480 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.630486 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.635088 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:18.635116 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.635126 29302 round_trippers.go:580] Audit-Id: 40dbf5e6-bdfd-4c25-924c-528834eef0a7
I0914 19:06:18.635135 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.635142 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.635150 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.635159 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.635173 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.636346 29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82928 chars]
I0914 19:06:18.639989 29302 system_pods.go:86] 12 kube-system pods found
I0914 19:06:18.640017 29302 system_pods.go:89] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running
I0914 19:06:18.640024 29302 system_pods.go:89] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running
I0914 19:06:18.640031 29302 system_pods.go:89] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running
I0914 19:06:18.640037 29302 system_pods.go:89] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
I0914 19:06:18.640043 29302 system_pods.go:89] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
I0914 19:06:18.640050 29302 system_pods.go:89] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running
I0914 19:06:18.640058 29302 system_pods.go:89] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running
I0914 19:06:18.640064 29302 system_pods.go:89] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
I0914 19:06:18.640071 29302 system_pods.go:89] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
I0914 19:06:18.640080 29302 system_pods.go:89] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running
I0914 19:06:18.640088 29302 system_pods.go:89] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running
I0914 19:06:18.640095 29302 system_pods.go:89] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running
I0914 19:06:18.640110 29302 system_pods.go:126] duration metric: took 206.118337ms to wait for k8s-apps to be running ...
I0914 19:06:18.640118 29302 system_svc.go:44] waiting for kubelet service to be running ....
I0914 19:06:18.640169 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0914 19:06:18.654395 29302 system_svc.go:56] duration metric: took 14.272365ms WaitForService to wait for kubelet.
I0914 19:06:18.654416 29302 kubeadm.go:581] duration metric: took 11.390867757s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0914 19:06:18.654443 29302 node_conditions.go:102] verifying NodePressure condition ...
I0914 19:06:18.830833 29302 request.go:629] Waited for 176.33044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
I0914 19:06:18.830908 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
I0914 19:06:18.830915 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.830925 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.830934 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.833992 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:18.834011 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.834020 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.834029 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.834038 29302 round_trippers.go:580] Audit-Id: 78eec727-aee2-400e-8c95-4146a9496a91
I0914 19:06:18.834047 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.834056 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.834064 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.834284 29302 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13543 chars]
I0914 19:06:18.835016 29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0914 19:06:18.835038 29302 node_conditions.go:123] node cpu capacity is 2
I0914 19:06:18.835048 29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0914 19:06:18.835052 29302 node_conditions.go:123] node cpu capacity is 2
I0914 19:06:18.835058 29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0914 19:06:18.835067 29302 node_conditions.go:123] node cpu capacity is 2
I0914 19:06:18.835073 29302 node_conditions.go:105] duration metric: took 180.624501ms to run NodePressure ...
I0914 19:06:18.835093 29302 start.go:228] waiting for startup goroutines ...
I0914 19:06:18.835102 29302 start.go:233] waiting for cluster config update ...
I0914 19:06:18.835115 29302 start.go:242] writing updated cluster config ...
I0914 19:06:18.835683 29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 19:06:18.835796 29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
I0914 19:06:18.838910 29302 out.go:177] * Starting worker node multinode-040952-m02 in cluster multinode-040952
I0914 19:06:18.840147 29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
I0914 19:06:18.840163 29302 cache.go:57] Caching tarball of preloaded images
I0914 19:06:18.840249 29302 preload.go:174] Found /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0914 19:06:18.840261 29302 cache.go:60] Finished verifying existence of preloaded tar for v1.28.1 on docker
I0914 19:06:18.840334 29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
I0914 19:06:18.840476 29302 start.go:365] acquiring machines lock for multinode-040952-m02: {Name:mk07a05e24a79016fc0a298412b40eb87df032d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0914 19:06:18.840512 29302 start.go:369] acquired machines lock for "multinode-040952-m02" in 19.707µs
I0914 19:06:18.840566 29302 start.go:96] Skipping create...Using existing machine configuration
I0914 19:06:18.840575 29302 fix.go:54] fixHost starting: m02
I0914 19:06:18.840830 29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 19:06:18.840857 29302 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 19:06:18.855469 29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
I0914 19:06:18.855890 29302 main.go:141] libmachine: () Calling .GetVersion
I0914 19:06:18.856329 29302 main.go:141] libmachine: Using API Version 1
I0914 19:06:18.856352 29302 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 19:06:18.856677 29302 main.go:141] libmachine: () Calling .GetMachineName
I0914 19:06:18.856891 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:18.857065 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetState
I0914 19:06:18.858712 29302 fix.go:102] recreateIfNeeded on multinode-040952-m02: state=Stopped err=<nil>
I0914 19:06:18.858735 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
W0914 19:06:18.858914 29302 fix.go:128] unexpected machine state, will restart: <nil>
I0914 19:06:18.861118 29302 out.go:177] * Restarting existing kvm2 VM for "multinode-040952-m02" ...
I0914 19:06:18.862649 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .Start
I0914 19:06:18.862832 29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring networks are active...
I0914 19:06:18.863554 29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring network default is active
I0914 19:06:18.863887 29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring network mk-multinode-040952 is active
I0914 19:06:18.864247 29302 main.go:141] libmachine: (multinode-040952-m02) Getting domain xml...
I0914 19:06:18.864791 29302 main.go:141] libmachine: (multinode-040952-m02) Creating domain...
I0914 19:06:20.114677 29302 main.go:141] libmachine: (multinode-040952-m02) Waiting to get IP...
I0914 19:06:20.115697 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:20.116116 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:20.116177 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.116093 29537 retry.go:31] will retry after 292.793167ms: waiting for machine to come up
I0914 19:06:20.410624 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:20.411041 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:20.411062 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.411011 29537 retry.go:31] will retry after 329.185161ms: waiting for machine to come up
I0914 19:06:20.741486 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:20.741956 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:20.741984 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.741922 29537 retry.go:31] will retry after 372.179082ms: waiting for machine to come up
I0914 19:06:21.115108 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:21.115492 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:21.115522 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:21.115446 29537 retry.go:31] will retry after 552.546331ms: waiting for machine to come up
I0914 19:06:21.669165 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:21.669673 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:21.669702 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:21.669630 29537 retry.go:31] will retry after 641.98724ms: waiting for machine to come up
I0914 19:06:22.313770 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:22.314305 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:22.314344 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:22.314258 29537 retry.go:31] will retry after 792.672163ms: waiting for machine to come up
I0914 19:06:23.108201 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:23.108628 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:23.108656 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:23.108582 29537 retry.go:31] will retry after 820.609535ms: waiting for machine to come up
I0914 19:06:23.930887 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:23.931350 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:23.931383 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:23.931293 29537 retry.go:31] will retry after 933.919914ms: waiting for machine to come up
I0914 19:06:24.866306 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:24.866762 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:24.866796 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:24.866720 29537 retry.go:31] will retry after 1.175445783s: waiting for machine to come up
I0914 19:06:26.044181 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:26.044639 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:26.044674 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:26.044595 29537 retry.go:31] will retry after 1.659114662s: waiting for machine to come up
I0914 19:06:27.705347 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:27.705796 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:27.705832 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:27.705738 29537 retry.go:31] will retry after 2.838813162s: waiting for machine to come up
I0914 19:06:30.546592 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:30.547049 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:30.547092 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:30.547042 29537 retry.go:31] will retry after 2.43743272s: waiting for machine to come up
I0914 19:06:32.987818 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:32.988277 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:32.988300 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:32.988246 29537 retry.go:31] will retry after 4.479558003s: waiting for machine to come up
I0914 19:06:37.471961 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.472352 29302 main.go:141] libmachine: (multinode-040952-m02) Found IP for machine: 192.168.39.16
I0914 19:06:37.472379 29302 main.go:141] libmachine: (multinode-040952-m02) Reserving static IP address...
I0914 19:06:37.472392 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has current primary IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.472813 29302 main.go:141] libmachine: (multinode-040952-m02) Reserved static IP address: 192.168.39.16
I0914 19:06:37.472867 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "multinode-040952-m02", mac: "52:54:00:2e:0b:03", ip: "192.168.39.16"} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.472882 29302 main.go:141] libmachine: (multinode-040952-m02) Waiting for SSH to be available...
I0914 19:06:37.472912 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | skip adding static IP to network mk-multinode-040952 - found existing host DHCP lease matching {name: "multinode-040952-m02", mac: "52:54:00:2e:0b:03", ip: "192.168.39.16"}
I0914 19:06:37.472930 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Getting to WaitForSSH function...
I0914 19:06:37.474853 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.475216 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.475243 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.475331 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Using SSH client type: external
I0914 19:06:37.475371 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa (-rw-------)
I0914 19:06:37.475423 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0914 19:06:37.475447 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | About to run SSH command:
I0914 19:06:37.475460 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | exit 0
I0914 19:06:37.565151 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | SSH cmd err, output: <nil>:
I0914 19:06:37.565511 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetConfigRaw
I0914 19:06:37.566140 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
I0914 19:06:37.568703 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.569097 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.569132 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.569351 29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
I0914 19:06:37.569551 29302 machine.go:88] provisioning docker machine ...
I0914 19:06:37.569568 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:37.569768 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
I0914 19:06:37.569927 29302 buildroot.go:166] provisioning hostname "multinode-040952-m02"
I0914 19:06:37.569954 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
I0914 19:06:37.570118 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:37.572245 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.572611 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.572640 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.572754 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:37.572896 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:37.573067 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:37.573182 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:37.573336 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:06:37.573757 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0914 19:06:37.573780 29302 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-040952-m02 && echo "multinode-040952-m02" | sudo tee /etc/hostname
I0914 19:06:37.710270 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-040952-m02
I0914 19:06:37.710294 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:37.712933 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.713287 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.713322 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.713438 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:37.713649 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:37.713830 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:37.713965 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:37.714153 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:06:37.714540 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0914 19:06:37.714569 29302 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-040952-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-040952-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-040952-m02' | sudo tee -a /etc/hosts;
fi
fi
I0914 19:06:37.850271 29302 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0914 19:06:37.850302 29302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17217-7285/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-7285/.minikube}
I0914 19:06:37.850321 29302 buildroot.go:174] setting up certificates
I0914 19:06:37.850331 29302 provision.go:83] configureAuth start
I0914 19:06:37.850343 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
I0914 19:06:37.850630 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
I0914 19:06:37.853071 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.853477 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.853512 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.853665 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:37.855889 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.856295 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.856327 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.856394 29302 provision.go:138] copyHostCerts
I0914 19:06:37.856430 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
I0914 19:06:37.856463 29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem, removing ...
I0914 19:06:37.856473 29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
I0914 19:06:37.856544 29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem (1082 bytes)
I0914 19:06:37.856653 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
I0914 19:06:37.856672 29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem, removing ...
I0914 19:06:37.856676 29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
I0914 19:06:37.856699 29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem (1123 bytes)
I0914 19:06:37.856741 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
I0914 19:06:37.856756 29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem, removing ...
I0914 19:06:37.856762 29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
I0914 19:06:37.856781 29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem (1679 bytes)
I0914 19:06:37.856823 29302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem org=jenkins.multinode-040952-m02 san=[192.168.39.16 192.168.39.16 localhost 127.0.0.1 minikube multinode-040952-m02]
I0914 19:06:37.904344 29302 provision.go:172] copyRemoteCerts
I0914 19:06:37.904397 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0914 19:06:37.904417 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:37.906652 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.906972 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.907008 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.907156 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:37.907312 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:37.907470 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:37.907613 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
I0914 19:06:38.000649 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0914 19:06:38.000741 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0914 19:06:38.025953 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem -> /etc/docker/server.pem
I0914 19:06:38.026028 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0914 19:06:38.048996 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0914 19:06:38.049067 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0914 19:06:38.072478 29302 provision.go:86] duration metric: configureAuth took 222.133675ms
I0914 19:06:38.072507 29302 buildroot.go:189] setting minikube options for container-runtime
I0914 19:06:38.072712 29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 19:06:38.072733 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:38.072954 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:38.075633 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:38.075959 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:38.076005 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:38.076116 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:38.076304 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:38.076482 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:38.076626 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:38.076778 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:06:38.077069 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0914 19:06:38.077082 29302 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0914 19:06:38.199048 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0914 19:06:38.199074 29302 buildroot.go:70] root file system type: tmpfs
I0914 19:06:38.199195 29302 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0914 19:06:38.199220 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:38.201601 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:38.201971 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:38.201992 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:38.202160 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:38.202374 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:38.202529 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:38.202642 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:38.202785 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:06:38.203087 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0914 19:06:38.203150 29302 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.14"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0914 19:06:38.339052 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.14
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0914 19:06:38.339081 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:38.341807 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:38.342226 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:38.342261 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:38.342430 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:38.342621 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:38.342798 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:38.342954 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:38.343119 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:06:38.343432 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0914 19:06:38.343461 29302 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0914 19:06:39.223778 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0914 19:06:39.223805 29302 machine.go:91] provisioned docker machine in 1.654241082s
I0914 19:06:39.223818 29302 start.go:300] post-start starting for "multinode-040952-m02" (driver="kvm2")
I0914 19:06:39.223828 29302 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0914 19:06:39.223843 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:39.224176 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0914 19:06:39.224211 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:39.226901 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.227247 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:39.227280 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.227544 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:39.227745 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:39.227911 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:39.228053 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
I0914 19:06:39.321534 29302 ssh_runner.go:195] Run: cat /etc/os-release
I0914 19:06:39.325932 29302 command_runner.go:130] > NAME=Buildroot
I0914 19:06:39.325948 29302 command_runner.go:130] > VERSION=2021.02.12-1-gaa3debf-dirty
I0914 19:06:39.325957 29302 command_runner.go:130] > ID=buildroot
I0914 19:06:39.325962 29302 command_runner.go:130] > VERSION_ID=2021.02.12
I0914 19:06:39.325972 29302 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0914 19:06:39.326365 29302 info.go:137] Remote host: Buildroot 2021.02.12
I0914 19:06:39.326381 29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/addons for local assets ...
I0914 19:06:39.326432 29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/files for local assets ...
I0914 19:06:39.326501 29302 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> 145062.pem in /etc/ssl/certs
I0914 19:06:39.326513 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /etc/ssl/certs/145062.pem
I0914 19:06:39.326584 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0914 19:06:39.336967 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /etc/ssl/certs/145062.pem (1708 bytes)
I0914 19:06:39.360557 29302 start.go:303] post-start completed in 136.725285ms
I0914 19:06:39.360581 29302 fix.go:56] fixHost completed within 20.520003113s
I0914 19:06:39.360605 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:39.362948 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.363269 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:39.363315 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.363388 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:39.363595 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:39.363783 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:39.363936 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:39.364099 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:06:39.364460 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0914 19:06:39.364472 29302 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0914 19:06:39.486077 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694718399.434257584
I0914 19:06:39.486101 29302 fix.go:206] guest clock: 1694718399.434257584
I0914 19:06:39.486110 29302 fix.go:219] Guest: 2023-09-14 19:06:39.434257584 +0000 UTC Remote: 2023-09-14 19:06:39.360584834 +0000 UTC m=+78.429360914 (delta=73.67275ms)
I0914 19:06:39.486128 29302 fix.go:190] guest clock delta is within tolerance: 73.67275ms
I0914 19:06:39.486135 29302 start.go:83] releasing machines lock for "multinode-040952-m02", held for 20.645613984s
I0914 19:06:39.486160 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:39.486442 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
I0914 19:06:39.488972 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.489301 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:39.489321 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.491933 29302 out.go:177] * Found network options:
I0914 19:06:39.493577 29302 out.go:177] - NO_PROXY=192.168.39.14
W0914 19:06:39.495217 29302 proxy.go:119] fail to check proxy env: Error ip not in block
I0914 19:06:39.495254 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:39.495809 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:39.495995 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:39.496072 29302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0914 19:06:39.496116 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
W0914 19:06:39.496205 29302 proxy.go:119] fail to check proxy env: Error ip not in block
I0914 19:06:39.496278 29302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0914 19:06:39.496299 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:39.498773 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.498969 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.499150 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:39.499181 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.499303 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:39.499318 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:39.499348 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.499474 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:39.499542 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:39.499625 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:39.499690 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:39.499747 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
I0914 19:06:39.499829 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:39.499990 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
I0914 19:06:39.587315 29302 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0914 19:06:39.587941 29302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0914 19:06:39.588006 29302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0914 19:06:39.610801 29302 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0914 19:06:39.610851 29302 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0914 19:06:39.610876 29302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0914 19:06:39.610891 29302 start.go:469] detecting cgroup driver to use...
I0914 19:06:39.610989 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0914 19:06:39.629605 29302 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0914 19:06:39.630150 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0914 19:06:39.641201 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0914 19:06:39.651880 29302 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0914 19:06:39.651937 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0914 19:06:39.663251 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0914 19:06:39.674202 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0914 19:06:39.685211 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0914 19:06:39.696908 29302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0914 19:06:39.709126 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0914 19:06:39.721014 29302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0914 19:06:39.731728 29302 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0914 19:06:39.731788 29302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0914 19:06:39.742220 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:06:39.854266 29302 ssh_runner.go:195] Run: sudo systemctl restart containerd
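The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place before containerd is restarted. A minimal sketch of the same substitutions, applied to an illustrative sample file under `/tmp` so it runs without root (the sample contents are assumptions, not the VM's actual config):

```shell
# Sample config.toml containing the fields the sed commands above target
# (illustrative values only -- not the guest VM's real file).
cat > /tmp/config.toml <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.8"
    restrict_oom_score_adj = true
    SystemdCgroup = true
    conf_dir = "/opt/cni/net.d"
EOF

# The same extended-regex substitutions shown in the log, minus sudo,
# pointed at the sample file. Each preserves leading indentation via \1.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /tmp/config.toml
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /tmp/config.toml
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /tmp/config.toml

cat /tmp/config.toml
```

The end result is a containerd configured for the `cgroupfs` driver with the v3.9 pause image, matching the "configuring containerd to use cgroupfs" line above.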
I0914 19:06:39.871417 29302 start.go:469] detecting cgroup driver to use...
I0914 19:06:39.871488 29302 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0914 19:06:39.884609 29302 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0914 19:06:39.884650 29302 command_runner.go:130] > [Unit]
I0914 19:06:39.884657 29302 command_runner.go:130] > Description=Docker Application Container Engine
I0914 19:06:39.884663 29302 command_runner.go:130] > Documentation=https://docs.docker.com
I0914 19:06:39.884669 29302 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0914 19:06:39.884677 29302 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0914 19:06:39.884682 29302 command_runner.go:130] > StartLimitBurst=3
I0914 19:06:39.884689 29302 command_runner.go:130] > StartLimitIntervalSec=60
I0914 19:06:39.884693 29302 command_runner.go:130] > [Service]
I0914 19:06:39.884698 29302 command_runner.go:130] > Type=notify
I0914 19:06:39.884702 29302 command_runner.go:130] > Restart=on-failure
I0914 19:06:39.884708 29302 command_runner.go:130] > Environment=NO_PROXY=192.168.39.14
I0914 19:06:39.884715 29302 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0914 19:06:39.884726 29302 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0914 19:06:39.884735 29302 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0914 19:06:39.884743 29302 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0914 19:06:39.884752 29302 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0914 19:06:39.884761 29302 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0914 19:06:39.884768 29302 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0914 19:06:39.884787 29302 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0914 19:06:39.884796 29302 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0914 19:06:39.884802 29302 command_runner.go:130] > ExecStart=
I0914 19:06:39.884821 29302 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0914 19:06:39.884831 29302 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0914 19:06:39.884838 29302 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0914 19:06:39.884845 29302 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0914 19:06:39.884852 29302 command_runner.go:130] > LimitNOFILE=infinity
I0914 19:06:39.884856 29302 command_runner.go:130] > LimitNPROC=infinity
I0914 19:06:39.884862 29302 command_runner.go:130] > LimitCORE=infinity
I0914 19:06:39.884867 29302 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0914 19:06:39.884875 29302 command_runner.go:130] > # Only systemd 226 and above support this version.
I0914 19:06:39.884879 29302 command_runner.go:130] > TasksMax=infinity
I0914 19:06:39.884888 29302 command_runner.go:130] > TimeoutStartSec=0
I0914 19:06:39.884894 29302 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0914 19:06:39.884898 29302 command_runner.go:130] > Delegate=yes
I0914 19:06:39.884905 29302 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0914 19:06:39.884917 29302 command_runner.go:130] > KillMode=process
I0914 19:06:39.884923 29302 command_runner.go:130] > [Install]
I0914 19:06:39.884929 29302 command_runner.go:130] > WantedBy=multi-user.target
I0914 19:06:39.885921 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0914 19:06:39.902340 29302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0914 19:06:39.919241 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0914 19:06:39.931882 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0914 19:06:39.944141 29302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0914 19:06:39.980328 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0914 19:06:39.993054 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0914 19:06:40.010119 29302 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0914 19:06:40.010413 29302 ssh_runner.go:195] Run: which cri-dockerd
I0914 19:06:40.014171 29302 command_runner.go:130] > /usr/bin/cri-dockerd
I0914 19:06:40.014287 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0914 19:06:40.024688 29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0914 19:06:40.042167 29302 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0914 19:06:40.160404 29302 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0914 19:06:40.272827 29302 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0914 19:06:40.272855 29302 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0914 19:06:40.289795 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:06:40.398781 29302 ssh_runner.go:195] Run: sudo systemctl restart docker
I0914 19:06:41.803191 29302 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.40437357s)
I0914 19:06:41.803251 29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0914 19:06:41.905435 29302 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0914 19:06:42.032291 29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0914 19:06:42.160622 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:06:42.277173 29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0914 19:06:42.292786 29302 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
I0914 19:06:42.294889 29302 out.go:177]
W0914 19:06:42.296193 29302 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0914 19:06:42.296210 29302 out.go:239] *
W0914 19:06:42.297001 29302 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0914 19:06:42.298210 29302 out.go:177]
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-linux-amd64 node list -p multinode-040952" : exit status 90
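The exit status 90 above traces back to one failing command inside the m02 VM: `sudo systemctl restart cri-docker.socket`. When triaging a saved log like this one, the failing command can be isolated mechanically. A sketch, using a here-doc stand-in for the real `logs.txt` so it is reproducible (the file path and its contents here are assumptions taken from the error line above):

```shell
# Stand-in for a captured minikube log; this single line is copied from
# the RUNTIME_ENABLE error emitted above.
cat > /tmp/logs.txt <<'EOF'
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
EOF

# Extract just the command that failed: everything between the last
# "runtime: " and ": Process exited".
sed -n 's/.*runtime: \(.*\): Process exited.*/\1/p' /tmp/logs.txt
```

On the node itself, the natural next steps would be `systemctl status cri-docker.socket` and the `journalctl -xe` the error message already suggests; the log here ends before that output is available.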
multinode_test.go:300: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-040952
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-040952 -n multinode-040952
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-040952 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-040952 logs -n 25: (1.260679982s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-040952 ssh -n | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-040952 cp multinode-040952-m02:/home/docker/cp-test.txt | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | /tmp/TestMultiNodeserialCopyFile3444693695/001/cp-test_multinode-040952-m02.txt | | | | | |
| ssh | multinode-040952 ssh -n | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-040952 cp multinode-040952-m02:/home/docker/cp-test.txt | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952:/home/docker/cp-test_multinode-040952-m02_multinode-040952.txt | | | | | |
| ssh | multinode-040952 ssh -n | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-040952 ssh -n multinode-040952 sudo cat | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | /home/docker/cp-test_multinode-040952-m02_multinode-040952.txt | | | | | |
| cp | multinode-040952 cp multinode-040952-m02:/home/docker/cp-test.txt | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952-m03:/home/docker/cp-test_multinode-040952-m02_multinode-040952-m03.txt | | | | | |
| ssh | multinode-040952 ssh -n | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-040952 ssh -n multinode-040952-m03 sudo cat | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | /home/docker/cp-test_multinode-040952-m02_multinode-040952-m03.txt | | | | | |
| cp | multinode-040952 cp testdata/cp-test.txt | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-040952 ssh -n | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-040952 cp multinode-040952-m03:/home/docker/cp-test.txt | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | /tmp/TestMultiNodeserialCopyFile3444693695/001/cp-test_multinode-040952-m03.txt | | | | | |
| ssh | multinode-040952 ssh -n | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-040952 cp multinode-040952-m03:/home/docker/cp-test.txt | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952:/home/docker/cp-test_multinode-040952-m03_multinode-040952.txt | | | | | |
| ssh | multinode-040952 ssh -n | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-040952 ssh -n multinode-040952 sudo cat | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | /home/docker/cp-test_multinode-040952-m03_multinode-040952.txt | | | | | |
| cp | multinode-040952 cp multinode-040952-m03:/home/docker/cp-test.txt | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952-m02:/home/docker/cp-test_multinode-040952-m03_multinode-040952-m02.txt | | | | | |
| ssh | multinode-040952 ssh -n | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | multinode-040952-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-040952 ssh -n multinode-040952-m02 sudo cat | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | /home/docker/cp-test_multinode-040952-m03_multinode-040952-m02.txt | | | | | |
| node | multinode-040952 node stop m03 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| node | multinode-040952 node start | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-040952 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | |
| stop | -p multinode-040952 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:05 UTC |
| start | -p multinode-040952 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:05 UTC | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-040952 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:06 UTC | |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/09/14 19:05:20
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.21.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0914 19:05:20.962804 29302 out.go:296] Setting OutFile to fd 1 ...
I0914 19:05:20.963060 29302 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 19:05:20.963070 29302 out.go:309] Setting ErrFile to fd 2...
I0914 19:05:20.963075 29302 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 19:05:20.963243 29302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
I0914 19:05:20.963781 29302 out.go:303] Setting JSON to false
I0914 19:05:20.964724 29302 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2870,"bootTime":1694715451,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0914 19:05:20.964780 29302 start.go:138] virtualization: kvm guest
I0914 19:05:20.967109 29302 out.go:177] * [multinode-040952] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
I0914 19:05:20.968562 29302 out.go:177] - MINIKUBE_LOCATION=17217
I0914 19:05:20.969984 29302 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0914 19:05:20.968648 29302 notify.go:220] Checking for updates...
I0914 19:05:20.972859 29302 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
I0914 19:05:20.974265 29302 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
I0914 19:05:20.975509 29302 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0914 19:05:20.976805 29302 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0914 19:05:20.978678 29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 19:05:20.978756 29302 driver.go:373] Setting default libvirt URI to qemu:///system
I0914 19:05:20.979122 29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 19:05:20.979158 29302 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 19:05:20.994127 29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36753
I0914 19:05:20.994544 29302 main.go:141] libmachine: () Calling .GetVersion
I0914 19:05:20.994996 29302 main.go:141] libmachine: Using API Version 1
I0914 19:05:20.995035 29302 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 19:05:20.995534 29302 main.go:141] libmachine: () Calling .GetMachineName
I0914 19:05:20.995713 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:21.030837 29302 out.go:177] * Using the kvm2 driver based on existing profile
I0914 19:05:21.032222 29302 start.go:298] selected driver: kvm2
I0914 19:05:21.032235 29302 start.go:902] validating driver "kvm2" against &{Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0914 19:05:21.032388 29302 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0914 19:05:21.032684 29302 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 19:05:21.032744 29302 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17217-7285/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0914 19:05:21.046926 29302 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
I0914 19:05:21.047549 29302 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0914 19:05:21.047615 29302 cni.go:84] Creating CNI manager for ""
I0914 19:05:21.047628 29302 cni.go:136] 3 nodes found, recommending kindnet
I0914 19:05:21.047635 29302 start_flags.go:321] config:
{Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s}
I0914 19:05:21.047846 29302 iso.go:125] acquiring lock: {Name:mk542b08865b5897b02c4d217212972b66d5575d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 19:05:21.049820 29302 out.go:177] * Starting control plane node multinode-040952 in cluster multinode-040952
I0914 19:05:21.051078 29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
I0914 19:05:21.051117 29302 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
I0914 19:05:21.051132 29302 cache.go:57] Caching tarball of preloaded images
I0914 19:05:21.051200 29302 preload.go:174] Found /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0914 19:05:21.051211 29302 cache.go:60] Finished verifying existence of preloaded tar for v1.28.1 on docker
I0914 19:05:21.051357 29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
I0914 19:05:21.051546 29302 start.go:365] acquiring machines lock for multinode-040952: {Name:mk07a05e24a79016fc0a298412b40eb87df032d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0914 19:05:21.051585 29302 start.go:369] acquired machines lock for "multinode-040952" in 22.658µs
I0914 19:05:21.051598 29302 start.go:96] Skipping create...Using existing machine configuration
I0914 19:05:21.051604 29302 fix.go:54] fixHost starting:
I0914 19:05:21.051851 29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 19:05:21.051877 29302 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 19:05:21.065211 29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41551
I0914 19:05:21.065673 29302 main.go:141] libmachine: () Calling .GetVersion
I0914 19:05:21.066137 29302 main.go:141] libmachine: Using API Version 1
I0914 19:05:21.066161 29302 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 19:05:21.066462 29302 main.go:141] libmachine: () Calling .GetMachineName
I0914 19:05:21.066623 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:21.066770 29302 main.go:141] libmachine: (multinode-040952) Calling .GetState
I0914 19:05:21.068116 29302 fix.go:102] recreateIfNeeded on multinode-040952: state=Stopped err=<nil>
I0914 19:05:21.068149 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
W0914 19:05:21.068327 29302 fix.go:128] unexpected machine state, will restart: <nil>
I0914 19:05:21.070143 29302 out.go:177] * Restarting existing kvm2 VM for "multinode-040952" ...
I0914 19:05:21.071437 29302 main.go:141] libmachine: (multinode-040952) Calling .Start
I0914 19:05:21.071593 29302 main.go:141] libmachine: (multinode-040952) Ensuring networks are active...
I0914 19:05:21.072249 29302 main.go:141] libmachine: (multinode-040952) Ensuring network default is active
I0914 19:05:21.072599 29302 main.go:141] libmachine: (multinode-040952) Ensuring network mk-multinode-040952 is active
I0914 19:05:21.072924 29302 main.go:141] libmachine: (multinode-040952) Getting domain xml...
I0914 19:05:21.073627 29302 main.go:141] libmachine: (multinode-040952) Creating domain...
I0914 19:05:22.290792 29302 main.go:141] libmachine: (multinode-040952) Waiting to get IP...
I0914 19:05:22.291697 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:22.292055 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:22.292102 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.292035 29331 retry.go:31] will retry after 308.296154ms: waiting for machine to come up
I0914 19:05:22.601636 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:22.602066 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:22.602099 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.602024 29331 retry.go:31] will retry after 317.837388ms: waiting for machine to come up
I0914 19:05:22.921508 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:22.921867 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:22.921901 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.921847 29331 retry.go:31] will retry after 471.086167ms: waiting for machine to come up
I0914 19:05:23.394404 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:23.394838 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:23.394871 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:23.394792 29331 retry.go:31] will retry after 484.306086ms: waiting for machine to come up
I0914 19:05:23.880204 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:23.880564 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:23.880583 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:23.880535 29331 retry.go:31] will retry after 618.601122ms: waiting for machine to come up
I0914 19:05:24.500881 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:24.501312 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:24.501338 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:24.501260 29331 retry.go:31] will retry after 909.340951ms: waiting for machine to come up
I0914 19:05:25.412225 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:25.412602 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:25.412643 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:25.412551 29331 retry.go:31] will retry after 1.126879825s: waiting for machine to come up
I0914 19:05:26.540657 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:26.541060 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:26.541092 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:26.541009 29331 retry.go:31] will retry after 1.102019824s: waiting for machine to come up
I0914 19:05:27.644123 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:27.644509 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:27.644533 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:27.644464 29331 retry.go:31] will retry after 1.486754446s: waiting for machine to come up
I0914 19:05:29.133039 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:29.133510 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:29.133535 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:29.133470 29331 retry.go:31] will retry after 2.117464983s: waiting for machine to come up
I0914 19:05:31.252796 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:31.253157 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:31.253189 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:31.253114 29331 retry.go:31] will retry after 2.386416431s: waiting for machine to come up
I0914 19:05:33.642490 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:33.643052 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:33.643079 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:33.643013 29331 retry.go:31] will retry after 2.611013914s: waiting for machine to come up
I0914 19:05:36.255832 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:36.256237 29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
I0914 19:05:36.256259 29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:36.256195 29331 retry.go:31] will retry after 4.317080822s: waiting for machine to come up
I0914 19:05:40.578744 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.579178 29302 main.go:141] libmachine: (multinode-040952) Found IP for machine: 192.168.39.14
I0914 19:05:40.579199 29302 main.go:141] libmachine: (multinode-040952) Reserving static IP address...
I0914 19:05:40.579208 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has current primary IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.579755 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "multinode-040952", mac: "52:54:00:0b:8d:f2", ip: "192.168.39.14"} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.579790 29302 main.go:141] libmachine: (multinode-040952) DBG | skip adding static IP to network mk-multinode-040952 - found existing host DHCP lease matching {name: "multinode-040952", mac: "52:54:00:0b:8d:f2", ip: "192.168.39.14"}
I0914 19:05:40.579808 29302 main.go:141] libmachine: (multinode-040952) Reserved static IP address: 192.168.39.14
I0914 19:05:40.579828 29302 main.go:141] libmachine: (multinode-040952) Waiting for SSH to be available...
I0914 19:05:40.579844 29302 main.go:141] libmachine: (multinode-040952) DBG | Getting to WaitForSSH function...
I0914 19:05:40.581922 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.582219 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.582248 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.582419 29302 main.go:141] libmachine: (multinode-040952) DBG | Using SSH client type: external
I0914 19:05:40.582441 29302 main.go:141] libmachine: (multinode-040952) DBG | Using SSH private key: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa (-rw-------)
I0914 19:05:40.582466 29302 main.go:141] libmachine: (multinode-040952) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa -p 22] /usr/bin/ssh <nil>}
I0914 19:05:40.582480 29302 main.go:141] libmachine: (multinode-040952) DBG | About to run SSH command:
I0914 19:05:40.582491 29302 main.go:141] libmachine: (multinode-040952) DBG | exit 0
I0914 19:05:40.677125 29302 main.go:141] libmachine: (multinode-040952) DBG | SSH cmd err, output: <nil>:
I0914 19:05:40.677493 29302 main.go:141] libmachine: (multinode-040952) Calling .GetConfigRaw
I0914 19:05:40.678081 29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
I0914 19:05:40.680506 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.680910 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.680945 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.681103 29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
I0914 19:05:40.681284 29302 machine.go:88] provisioning docker machine ...
I0914 19:05:40.681323 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:40.681566 29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
I0914 19:05:40.681734 29302 buildroot.go:166] provisioning hostname "multinode-040952"
I0914 19:05:40.681755 29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
I0914 19:05:40.681906 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:40.683964 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.684284 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.684307 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.684417 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:40.684595 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:40.684736 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:40.684890 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:40.685062 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:05:40.685397 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.14 22 <nil> <nil>}
I0914 19:05:40.685412 29302 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-040952 && echo "multinode-040952" | sudo tee /etc/hostname
I0914 19:05:40.823251 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-040952
I0914 19:05:40.823283 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:40.825791 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.826169 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.826206 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.826321 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:40.826510 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:40.826658 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:40.826793 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:40.826952 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:05:40.827274 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.14 22 <nil> <nil>}
I0914 19:05:40.827292 29302 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-040952' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-040952/g' /etc/hosts;
else
echo '127.0.1.1 multinode-040952' | sudo tee -a /etc/hosts;
fi
fi
I0914 19:05:40.958211 29302 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0914 19:05:40.958234 29302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17217-7285/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-7285/.minikube}
I0914 19:05:40.958251 29302 buildroot.go:174] setting up certificates
I0914 19:05:40.958258 29302 provision.go:83] configureAuth start
I0914 19:05:40.958270 29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
I0914 19:05:40.958579 29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
I0914 19:05:40.960950 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.961279 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.961310 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.961443 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:40.963552 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.964139 29302 provision.go:138] copyHostCerts
I0914 19:05:40.966068 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:40.966080 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
I0914 19:05:40.966098 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:40.966106 29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem, removing ...
I0914 19:05:40.966111 29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
I0914 19:05:40.966169 29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem (1082 bytes)
I0914 19:05:40.966263 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
I0914 19:05:40.966284 29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem, removing ...
I0914 19:05:40.966291 29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
I0914 19:05:40.966314 29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem (1123 bytes)
I0914 19:05:40.966407 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
I0914 19:05:40.966426 29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem, removing ...
I0914 19:05:40.966429 29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
I0914 19:05:40.966455 29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem (1679 bytes)
I0914 19:05:40.966496 29302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem org=jenkins.multinode-040952 san=[192.168.39.14 192.168.39.14 localhost 127.0.0.1 minikube multinode-040952]
I0914 19:05:41.093709 29302 provision.go:172] copyRemoteCerts
I0914 19:05:41.093761 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0914 19:05:41.093784 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:41.096513 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.096889 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:41.096919 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.097089 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:41.097303 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.097427 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:41.097563 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
I0914 19:05:41.185959 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0914 19:05:41.186035 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0914 19:05:41.209076 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem -> /etc/docker/server.pem
I0914 19:05:41.209136 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0914 19:05:41.231360 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0914 19:05:41.231432 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0914 19:05:41.253346 29302 provision.go:86] duration metric: configureAuth took 295.075916ms
I0914 19:05:41.253364 29302 buildroot.go:189] setting minikube options for container-runtime
I0914 19:05:41.253583 29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 19:05:41.253604 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:41.253889 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:41.256397 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.256706 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:41.256746 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.256796 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:41.256990 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.257147 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.257300 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:41.257433 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:05:41.257764 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.14 22 <nil> <nil>}
I0914 19:05:41.257781 29302 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0914 19:05:41.378606 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0914 19:05:41.378636 29302 buildroot.go:70] root file system type: tmpfs
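The provisioner identifies the guest's root filesystem by running `df` over SSH and taking the last line; the same probe can be reproduced locally (note `--output` is GNU coreutils; BSD `df` lacks it). On the buildroot guest this prints `tmpfs`, since the ISO runs entirely from RAM; on a typical host it will print `ext4`, `btrfs`, etc.:

```shell
# Probe the filesystem type of / the same way the log above does.
fstype="$(df --output=fstype / | tail -n 1)"
echo "root file system type: ${fstype}"
```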
I0914 19:05:41.378779 29302 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0914 19:05:41.378811 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:41.381344 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.381631 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:41.381653 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.381854 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:41.382017 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.382151 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.382256 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:41.382401 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:05:41.382846 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.14 22 <nil> <nil>}
I0914 19:05:41.382955 29302 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0914 19:05:41.524710 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0914 19:05:41.524751 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:41.527598 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.528021 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:41.528050 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:41.528233 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:41.528403 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.528520 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:41.528618 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:41.528833 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:05:41.529147 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.14 22 <nil> <nil>}
I0914 19:05:41.529175 29302 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0914 19:05:42.395560 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0914 19:05:42.395591 29302 machine.go:91] provisioned docker machine in 1.714293106s
I0914 19:05:42.395605 29302 start.go:300] post-start starting for "multinode-040952" (driver="kvm2")
I0914 19:05:42.395617 29302 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0914 19:05:42.395637 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:42.395990 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0914 19:05:42.396021 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:42.398544 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.398997 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:42.399029 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.399146 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:42.399327 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:42.399452 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:42.399604 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
I0914 19:05:42.490598 29302 ssh_runner.go:195] Run: cat /etc/os-release
I0914 19:05:42.494659 29302 command_runner.go:130] > NAME=Buildroot
I0914 19:05:42.494675 29302 command_runner.go:130] > VERSION=2021.02.12-1-gaa3debf-dirty
I0914 19:05:42.494679 29302 command_runner.go:130] > ID=buildroot
I0914 19:05:42.494684 29302 command_runner.go:130] > VERSION_ID=2021.02.12
I0914 19:05:42.494689 29302 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0914 19:05:42.494714 29302 info.go:137] Remote host: Buildroot 2021.02.12
I0914 19:05:42.494726 29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/addons for local assets ...
I0914 19:05:42.494786 29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/files for local assets ...
I0914 19:05:42.494859 29302 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> 145062.pem in /etc/ssl/certs
I0914 19:05:42.494867 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /etc/ssl/certs/145062.pem
I0914 19:05:42.494949 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0914 19:05:42.504158 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /etc/ssl/certs/145062.pem (1708 bytes)
I0914 19:05:42.526832 29302 start.go:303] post-start completed in 131.213234ms
I0914 19:05:42.526851 29302 fix.go:56] fixHost completed within 21.475246623s
I0914 19:05:42.526869 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:42.529527 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.529937 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:42.529986 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.530137 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:42.530338 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:42.530471 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:42.530592 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:42.530728 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:05:42.531030 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.14 22 <nil> <nil>}
I0914 19:05:42.531041 29302 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0914 19:05:42.654398 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694718342.602499385
I0914 19:05:42.654428 29302 fix.go:206] guest clock: 1694718342.602499385
I0914 19:05:42.654435 29302 fix.go:219] Guest: 2023-09-14 19:05:42.602499385 +0000 UTC Remote: 2023-09-14 19:05:42.526854621 +0000 UTC m=+21.595630701 (delta=75.644764ms)
I0914 19:05:42.654452 29302 fix.go:190] guest clock delta is within tolerance: 75.644764ms
I0914 19:05:42.654457 29302 start.go:83] releasing machines lock for "multinode-040952", held for 21.60286411s
I0914 19:05:42.654478 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:42.654724 29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
I0914 19:05:42.657287 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.657640 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:42.657674 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.657831 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:42.658283 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:42.658453 29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
I0914 19:05:42.658514 29302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0914 19:05:42.658551 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:42.658645 29302 ssh_runner.go:195] Run: cat /version.json
I0914 19:05:42.658666 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
I0914 19:05:42.660832 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.661105 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.661257 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:42.661287 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.661432 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:42.661445 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:42.661474 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:42.661579 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
I0914 19:05:42.661683 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:42.661749 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
I0914 19:05:42.661825 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:42.661884 29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
I0914 19:05:42.661944 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
I0914 19:05:42.661988 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
I0914 19:05:42.746664 29302 command_runner.go:130] > {"iso_version": "v1.31.0-1694468241-17194", "kicbase_version": "v0.0.40-1694457807-17194", "minikube_version": "v1.31.2", "commit": "08513a9f809e39764bdb93fc427d760a652ba5ea"}
I0914 19:05:42.747194 29302 ssh_runner.go:195] Run: systemctl --version
I0914 19:05:42.773722 29302 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0914 19:05:42.773771 29302 command_runner.go:130] > systemd 247 (247)
I0914 19:05:42.773794 29302 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I0914 19:05:42.773870 29302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0914 19:05:42.779663 29302 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0914 19:05:42.779691 29302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0914 19:05:42.779753 29302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0914 19:05:42.796458 29302 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0914 19:05:42.796494 29302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0914 19:05:42.796506 29302 start.go:469] detecting cgroup driver to use...
I0914 19:05:42.796618 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0914 19:05:42.814727 29302 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0914 19:05:42.815085 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0914 19:05:42.825286 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0914 19:05:42.835590 29302 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0914 19:05:42.835639 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0914 19:05:42.845397 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0914 19:05:42.855075 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0914 19:05:42.864775 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0914 19:05:42.874625 29302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0914 19:05:42.885032 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0914 19:05:42.895300 29302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0914 19:05:42.904333 29302 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0914 19:05:42.904406 29302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0914 19:05:42.913443 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:05:43.014402 29302 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0914 19:05:43.034266 29302 start.go:469] detecting cgroup driver to use...
I0914 19:05:43.034341 29302 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0914 19:05:43.046339 29302 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0914 19:05:43.047277 29302 command_runner.go:130] > [Unit]
I0914 19:05:43.047292 29302 command_runner.go:130] > Description=Docker Application Container Engine
I0914 19:05:43.047300 29302 command_runner.go:130] > Documentation=https://docs.docker.com
I0914 19:05:43.047311 29302 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0914 19:05:43.047321 29302 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0914 19:05:43.047330 29302 command_runner.go:130] > StartLimitBurst=3
I0914 19:05:43.047340 29302 command_runner.go:130] > StartLimitIntervalSec=60
I0914 19:05:43.047347 29302 command_runner.go:130] > [Service]
I0914 19:05:43.047354 29302 command_runner.go:130] > Type=notify
I0914 19:05:43.047374 29302 command_runner.go:130] > Restart=on-failure
I0914 19:05:43.047387 29302 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0914 19:05:43.047408 29302 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0914 19:05:43.047423 29302 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0914 19:05:43.047437 29302 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0914 19:05:43.047453 29302 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0914 19:05:43.047465 29302 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0914 19:05:43.047478 29302 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0914 19:05:43.047499 29302 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0914 19:05:43.047514 29302 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0914 19:05:43.047523 29302 command_runner.go:130] > ExecStart=
I0914 19:05:43.047549 29302 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0914 19:05:43.047562 29302 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0914 19:05:43.047574 29302 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0914 19:05:43.047589 29302 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0914 19:05:43.047600 29302 command_runner.go:130] > LimitNOFILE=infinity
I0914 19:05:43.047609 29302 command_runner.go:130] > LimitNPROC=infinity
I0914 19:05:43.047619 29302 command_runner.go:130] > LimitCORE=infinity
I0914 19:05:43.047632 29302 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0914 19:05:43.047647 29302 command_runner.go:130] > # Only systemd 226 and above support this version.
I0914 19:05:43.047657 29302 command_runner.go:130] > TasksMax=infinity
I0914 19:05:43.047668 29302 command_runner.go:130] > TimeoutStartSec=0
I0914 19:05:43.047682 29302 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0914 19:05:43.047692 29302 command_runner.go:130] > Delegate=yes
I0914 19:05:43.047706 29302 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0914 19:05:43.047716 29302 command_runner.go:130] > KillMode=process
I0914 19:05:43.047721 29302 command_runner.go:130] > [Install]
I0914 19:05:43.047732 29302 command_runner.go:130] > WantedBy=multi-user.target
I0914 19:05:43.047831 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0914 19:05:43.059348 29302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0914 19:05:43.076586 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0914 19:05:43.091070 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0914 19:05:43.103630 29302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0914 19:05:43.127566 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0914 19:05:43.140558 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0914 19:05:43.157218 29302 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0914 19:05:43.157773 29302 ssh_runner.go:195] Run: which cri-dockerd
I0914 19:05:43.161227 29302 command_runner.go:130] > /usr/bin/cri-dockerd
I0914 19:05:43.161332 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0914 19:05:43.168999 29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0914 19:05:43.184057 29302 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0914 19:05:43.293264 29302 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0914 19:05:43.399283 29302 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0914 19:05:43.399314 29302 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0914 19:05:43.416580 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:05:43.527824 29302 ssh_runner.go:195] Run: sudo systemctl restart docker
I0914 19:05:43.992016 29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0914 19:05:44.097079 29302 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0914 19:05:44.209025 29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0914 19:05:44.320513 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:05:44.428053 29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0914 19:05:44.444720 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:05:44.552820 29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0914 19:05:44.632416 29302 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0914 19:05:44.632491 29302 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0914 19:05:44.638252 29302 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0914 19:05:44.638276 29302 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0914 19:05:44.638286 29302 command_runner.go:130] > Device: 16h/22d Inode: 831 Links: 1
I0914 19:05:44.638296 29302 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0914 19:05:44.638305 29302 command_runner.go:130] > Access: 2023-09-14 19:05:44.514543091 +0000
I0914 19:05:44.638313 29302 command_runner.go:130] > Modify: 2023-09-14 19:05:44.514543091 +0000
I0914 19:05:44.638326 29302 command_runner.go:130] > Change: 2023-09-14 19:05:44.517543091 +0000
I0914 19:05:44.638332 29302 command_runner.go:130] > Birth: -
I0914 19:05:44.638715 29302 start.go:537] Will wait 60s for crictl version
I0914 19:05:44.638765 29302 ssh_runner.go:195] Run: which crictl
I0914 19:05:44.642939 29302 command_runner.go:130] > /usr/bin/crictl
I0914 19:05:44.643309 29302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0914 19:05:44.681642 29302 command_runner.go:130] > Version: 0.1.0
I0914 19:05:44.681667 29302 command_runner.go:130] > RuntimeName: docker
I0914 19:05:44.681672 29302 command_runner.go:130] > RuntimeVersion: 24.0.6
I0914 19:05:44.681678 29302 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0914 19:05:44.683160 29302 start.go:553] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.6
RuntimeApiVersion: v1alpha2
I0914 19:05:44.683219 29302 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0914 19:05:44.707204 29302 command_runner.go:130] > 24.0.6
I0914 19:05:44.708405 29302 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0914 19:05:44.736598 29302 command_runner.go:130] > 24.0.6
I0914 19:05:44.738686 29302 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
I0914 19:05:44.738719 29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
I0914 19:05:44.741297 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:44.741690 29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
I0914 19:05:44.741717 29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
I0914 19:05:44.741894 29302 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0914 19:05:44.745777 29302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0914 19:05:44.758482 29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
I0914 19:05:44.758533 29302 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0914 19:05:44.777353 29302 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
I0914 19:05:44.777369 29302 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
I0914 19:05:44.777375 29302 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
I0914 19:05:44.777380 29302 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
I0914 19:05:44.777385 29302 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
I0914 19:05:44.777389 29302 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I0914 19:05:44.777395 29302 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I0914 19:05:44.777399 29302 command_runner.go:130] > registry.k8s.io/pause:3.9
I0914 19:05:44.777404 29302 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0914 19:05:44.777409 29302 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0914 19:05:44.777499 29302 docker.go:636] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-scheduler:v1.28.1
kindest/kindnetd:v20230809-80a64d96
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0914 19:05:44.777521 29302 docker.go:566] Images already preloaded, skipping extraction
I0914 19:05:44.777580 29302 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0914 19:05:44.796442 29302 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
I0914 19:05:44.796466 29302 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
I0914 19:05:44.796474 29302 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
I0914 19:05:44.796487 29302 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
I0914 19:05:44.796495 29302 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
I0914 19:05:44.796502 29302 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I0914 19:05:44.796510 29302 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I0914 19:05:44.796517 29302 command_runner.go:130] > registry.k8s.io/pause:3.9
I0914 19:05:44.796526 29302 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0914 19:05:44.796533 29302 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0914 19:05:44.796582 29302 docker.go:636] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-scheduler:v1.28.1
kindest/kindnetd:v20230809-80a64d96
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0914 19:05:44.796603 29302 cache_images.go:84] Images are preloaded, skipping loading
I0914 19:05:44.796662 29302 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0914 19:05:44.826844 29302 command_runner.go:130] > cgroupfs
I0914 19:05:44.827994 29302 cni.go:84] Creating CNI manager for ""
I0914 19:05:44.828012 29302 cni.go:136] 3 nodes found, recommending kindnet
I0914 19:05:44.828028 29302 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0914 19:05:44.828050 29302 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.14 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-040952 NodeName:multinode-040952 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0914 19:05:44.828163 29302 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.14
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "multinode-040952"
kubeletExtraArgs:
node-ip: 192.168.39.14
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.14"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0914 19:05:44.828241 29302 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-040952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
[Install]
config:
{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0914 19:05:44.828290 29302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
I0914 19:05:44.837426 29302 command_runner.go:130] > kubeadm
I0914 19:05:44.837444 29302 command_runner.go:130] > kubectl
I0914 19:05:44.837448 29302 command_runner.go:130] > kubelet
I0914 19:05:44.837478 29302 binaries.go:44] Found k8s binaries, skipping transfer
I0914 19:05:44.837538 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0914 19:05:44.845710 29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
I0914 19:05:44.861289 29302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0914 19:05:44.876364 29302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
I0914 19:05:44.892748 29302 ssh_runner.go:195] Run: grep 192.168.39.14 control-plane.minikube.internal$ /etc/hosts
I0914 19:05:44.896225 29302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.14 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0914 19:05:44.908521 29302 certs.go:56] Setting up /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952 for IP: 192.168.39.14
I0914 19:05:44.908554 29302 certs.go:190] acquiring lock for shared ca certs: {Name:mk8231a646ae91c44c394a9ea29f867fd3f74220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0914 19:05:44.908702 29302 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key
I0914 19:05:44.908750 29302 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key
I0914 19:05:44.908825 29302 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key
I0914 19:05:44.908896 29302 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key.ba52ec04
I0914 19:05:44.908936 29302 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key
I0914 19:05:44.908959 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0914 19:05:44.908984 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0914 19:05:44.909003 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0914 19:05:44.909021 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0914 19:05:44.909038 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0914 19:05:44.909057 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0914 19:05:44.909069 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0914 19:05:44.909083 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0914 19:05:44.909133 29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem (1338 bytes)
W0914 19:05:44.909164 29302 certs.go:433] ignoring /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506_empty.pem, impossibly tiny 0 bytes
I0914 19:05:44.909175 29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem (1679 bytes)
I0914 19:05:44.909194 29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem (1082 bytes)
I0914 19:05:44.909221 29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem (1123 bytes)
I0914 19:05:44.909246 29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem (1679 bytes)
I0914 19:05:44.909284 29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem (1708 bytes)
I0914 19:05:44.909309 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem -> /usr/share/ca-certificates/14506.pem
I0914 19:05:44.909322 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /usr/share/ca-certificates/145062.pem
I0914 19:05:44.909336 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0914 19:05:44.909846 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0914 19:05:44.934419 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0914 19:05:44.957511 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0914 19:05:44.980559 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0914 19:05:45.004923 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0914 19:05:45.028375 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0914 19:05:45.051817 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0914 19:05:45.074510 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0914 19:05:45.098260 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem --> /usr/share/ca-certificates/14506.pem (1338 bytes)
I0914 19:05:45.121292 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /usr/share/ca-certificates/145062.pem (1708 bytes)
I0914 19:05:45.144038 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0914 19:05:45.166026 29302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0914 19:05:45.181807 29302 ssh_runner.go:195] Run: openssl version
I0914 19:05:45.187376 29302 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0914 19:05:45.187428 29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14506.pem && ln -fs /usr/share/ca-certificates/14506.pem /etc/ssl/certs/14506.pem"
I0914 19:05:45.196849 29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14506.pem
I0914 19:05:45.201160 29302 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 18:48 /usr/share/ca-certificates/14506.pem
I0914 19:05:45.201218 29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 18:48 /usr/share/ca-certificates/14506.pem
I0914 19:05:45.201259 29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14506.pem
I0914 19:05:45.206455 29302 command_runner.go:130] > 51391683
I0914 19:05:45.206657 29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14506.pem /etc/ssl/certs/51391683.0"
I0914 19:05:45.216148 29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145062.pem && ln -fs /usr/share/ca-certificates/145062.pem /etc/ssl/certs/145062.pem"
I0914 19:05:45.225498 29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145062.pem
I0914 19:05:45.229584 29302 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 18:48 /usr/share/ca-certificates/145062.pem
I0914 19:05:45.229749 29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 18:48 /usr/share/ca-certificates/145062.pem
I0914 19:05:45.229794 29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145062.pem
I0914 19:05:45.235209 29302 command_runner.go:130] > 3ec20f2e
I0914 19:05:45.235283 29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145062.pem /etc/ssl/certs/3ec20f2e.0"
I0914 19:05:45.244557 29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0914 19:05:45.253825 29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0914 19:05:45.258352 29302 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
I0914 19:05:45.258379 29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
I0914 19:05:45.258421 29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0914 19:05:45.263679 29302 command_runner.go:130] > b5213941
I0914 19:05:45.263724 29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0914 19:05:45.273201 29302 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0914 19:05:45.277387 29302 command_runner.go:130] > ca.crt
I0914 19:05:45.277404 29302 command_runner.go:130] > ca.key
I0914 19:05:45.277412 29302 command_runner.go:130] > healthcheck-client.crt
I0914 19:05:45.277419 29302 command_runner.go:130] > healthcheck-client.key
I0914 19:05:45.277426 29302 command_runner.go:130] > peer.crt
I0914 19:05:45.277433 29302 command_runner.go:130] > peer.key
I0914 19:05:45.277439 29302 command_runner.go:130] > server.crt
I0914 19:05:45.277446 29302 command_runner.go:130] > server.key
I0914 19:05:45.277502 29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0914 19:05:45.283251 29302 command_runner.go:130] > Certificate will not expire
I0914 19:05:45.283310 29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0914 19:05:45.289331 29302 command_runner.go:130] > Certificate will not expire
I0914 19:05:45.289405 29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0914 19:05:45.295261 29302 command_runner.go:130] > Certificate will not expire
I0914 19:05:45.295329 29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0914 19:05:45.300680 29302 command_runner.go:130] > Certificate will not expire
I0914 19:05:45.300910 29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0914 19:05:45.306424 29302 command_runner.go:130] > Certificate will not expire
I0914 19:05:45.306599 29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0914 19:05:45.311906 29302 command_runner.go:130] > Certificate will not expire
I0914 19:05:45.312249 29302 kubeadm.go:404] StartCluster: {Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0914 19:05:45.312423 29302 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0914 19:05:45.331162 29302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0914 19:05:45.340190 29302 command_runner.go:130] > /var/lib/kubelet/config.yaml
I0914 19:05:45.340212 29302 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
I0914 19:05:45.340221 29302 command_runner.go:130] > /var/lib/minikube/etcd:
I0914 19:05:45.340226 29302 command_runner.go:130] > member
I0914 19:05:45.340246 29302 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I0914 19:05:45.340267 29302 kubeadm.go:636] restartCluster start
I0914 19:05:45.340309 29302 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0914 19:05:45.348452 29302 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0914 19:05:45.348894 29302 kubeconfig.go:135] verify returned: extract IP: "multinode-040952" does not appear in /home/jenkins/minikube-integration/17217-7285/kubeconfig
I0914 19:05:45.348998 29302 kubeconfig.go:146] "multinode-040952" context is missing from /home/jenkins/minikube-integration/17217-7285/kubeconfig - will repair!
I0914 19:05:45.349266 29302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-7285/kubeconfig: {Name:mkd810f3a7b7ee0c3e3eff94a19f3da881e8200c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0914 19:05:45.349662 29302 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17217-7285/kubeconfig
I0914 19:05:45.349849 29302 kapi.go:59] client config for multinode-040952: &rest.Config{Host:"https://192.168.39.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key", CAFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0914 19:05:45.350444 29302 cert_rotation.go:137] Starting client certificate rotation controller
I0914 19:05:45.350587 29302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0914 19:05:45.358418 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:45.358456 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:45.368403 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:45.368429 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:45.368512 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:45.378454 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:45.879114 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:45.879187 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:45.890404 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:46.379073 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:46.379137 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:46.390460 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:46.878635 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:46.878712 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:46.890234 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:47.378771 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:47.378861 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:47.390972 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:47.879569 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:47.879636 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:47.891015 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:48.378618 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:48.378691 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:48.390037 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:48.878591 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:48.878656 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:48.889682 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:49.379283 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:49.379348 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:49.390298 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:49.878830 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:49.878929 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:49.890070 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:50.378594 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:50.378669 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:50.389750 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:50.879406 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:50.879474 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:50.890792 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:51.378749 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:51.378818 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:51.390362 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:51.878913 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:51.878983 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:51.890684 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:52.379313 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:52.379396 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:52.390412 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:52.878965 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:52.879054 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:52.890079 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:53.378659 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:53.378734 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:53.389835 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:53.879480 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:53.879549 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:53.890643 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:54.379316 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:54.379396 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:54.390543 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:54.879126 29302 api_server.go:166] Checking apiserver status ...
I0914 19:05:54.879190 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0914 19:05:54.890939 29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0914 19:05:55.358694 29302 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
I0914 19:05:55.358719 29302 kubeadm.go:1128] stopping kube-system containers ...
I0914 19:05:55.358774 29302 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0914 19:05:55.380728 29302 command_runner.go:130] > 5ca168b256ec
I0914 19:05:55.380744 29302 command_runner.go:130] > bda018c9a602
I0914 19:05:55.380748 29302 command_runner.go:130] > fb2dbcea99e9
I0914 19:05:55.380752 29302 command_runner.go:130] > 2de9c2baa72f
I0914 19:05:55.380756 29302 command_runner.go:130] > 1dac2d18ee96
I0914 19:05:55.380760 29302 command_runner.go:130] > bd14e8416f22
I0914 19:05:55.380764 29302 command_runner.go:130] > 2c6b193d8f06
I0914 19:05:55.380768 29302 command_runner.go:130] > ac89590af9af
I0914 19:05:55.380771 29302 command_runner.go:130] > e7dd2a8d2bf2
I0914 19:05:55.380776 29302 command_runner.go:130] > 79de1cbad023
I0914 19:05:55.380780 29302 command_runner.go:130] > bdae306df774
I0914 19:05:55.380783 29302 command_runner.go:130] > 7ae1932584ff
I0914 19:05:55.380787 29302 command_runner.go:130] > 3204588282f3
I0914 19:05:55.380790 29302 command_runner.go:130] > c60a4b7edf2a
I0914 19:05:55.380794 29302 command_runner.go:130] > bf69af78fefd
I0914 19:05:55.380798 29302 command_runner.go:130] > 992d221cf3de
I0914 19:05:55.381007 29302 docker.go:462] Stopping containers: [5ca168b256ec bda018c9a602 fb2dbcea99e9 2de9c2baa72f 1dac2d18ee96 bd14e8416f22 2c6b193d8f06 ac89590af9af e7dd2a8d2bf2 79de1cbad023 bdae306df774 7ae1932584ff 3204588282f3 c60a4b7edf2a bf69af78fefd 992d221cf3de]
I0914 19:05:55.381063 29302 ssh_runner.go:195] Run: docker stop 5ca168b256ec bda018c9a602 fb2dbcea99e9 2de9c2baa72f 1dac2d18ee96 bd14e8416f22 2c6b193d8f06 ac89590af9af e7dd2a8d2bf2 79de1cbad023 bdae306df774 7ae1932584ff 3204588282f3 c60a4b7edf2a bf69af78fefd 992d221cf3de
I0914 19:05:55.400500 29302 command_runner.go:130] > 5ca168b256ec
I0914 19:05:55.400523 29302 command_runner.go:130] > bda018c9a602
I0914 19:05:55.400528 29302 command_runner.go:130] > fb2dbcea99e9
I0914 19:05:55.400532 29302 command_runner.go:130] > 2de9c2baa72f
I0914 19:05:55.400537 29302 command_runner.go:130] > 1dac2d18ee96
I0914 19:05:55.400545 29302 command_runner.go:130] > bd14e8416f22
I0914 19:05:55.400549 29302 command_runner.go:130] > 2c6b193d8f06
I0914 19:05:55.400915 29302 command_runner.go:130] > ac89590af9af
I0914 19:05:55.400933 29302 command_runner.go:130] > e7dd2a8d2bf2
I0914 19:05:55.400941 29302 command_runner.go:130] > 79de1cbad023
I0914 19:05:55.400947 29302 command_runner.go:130] > bdae306df774
I0914 19:05:55.400953 29302 command_runner.go:130] > 7ae1932584ff
I0914 19:05:55.400959 29302 command_runner.go:130] > 3204588282f3
I0914 19:05:55.400965 29302 command_runner.go:130] > c60a4b7edf2a
I0914 19:05:55.400970 29302 command_runner.go:130] > bf69af78fefd
I0914 19:05:55.400976 29302 command_runner.go:130] > 992d221cf3de
I0914 19:05:55.402045 29302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0914 19:05:55.416372 29302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0914 19:05:55.424910 29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0914 19:05:55.424932 29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0914 19:05:55.424943 29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0914 19:05:55.424952 29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0914 19:05:55.424980 29302 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0914 19:05:55.425021 29302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0914 19:05:55.433299 29302 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0914 19:05:55.433317 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0914 19:05:55.549527 29302 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0914 19:05:55.549554 29302 command_runner.go:130] > [certs] Using existing ca certificate authority
I0914 19:05:55.549564 29302 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I0914 19:05:55.549574 29302 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0914 19:05:55.549583 29302 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
I0914 19:05:55.549599 29302 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
I0914 19:05:55.549609 29302 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
I0914 19:05:55.549615 29302 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
I0914 19:05:55.549624 29302 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
I0914 19:05:55.549633 29302 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0914 19:05:55.549640 29302 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
I0914 19:05:55.549657 29302 command_runner.go:130] > [certs] Using the existing "sa" key
I0914 19:05:55.549745 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0914 19:05:55.598988 29302 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0914 19:05:55.824313 29302 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0914 19:05:55.900894 29302 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0914 19:05:56.276915 29302 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0914 19:05:56.339928 29302 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0914 19:05:56.342661 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0914 19:05:56.405203 29302 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0914 19:05:56.406633 29302 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0914 19:05:56.407055 29302 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0914 19:05:56.524034 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0914 19:05:56.589683 29302 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0914 19:05:56.589714 29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0914 19:05:56.593812 29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0914 19:05:56.595032 29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0914 19:05:56.597321 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0914 19:05:56.696497 29302 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0914 19:05:56.699815 29302 api_server.go:52] waiting for apiserver process to appear ...
I0914 19:05:56.699898 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:56.713289 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:57.226345 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:57.726390 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:58.226095 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:58.726390 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:59.226644 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:05:59.241067 29302 command_runner.go:130] > 1693
I0914 19:05:59.241381 29302 api_server.go:72] duration metric: took 2.541565826s to wait for apiserver process to appear ...
I0914 19:05:59.241402 29302 api_server.go:88] waiting for apiserver healthz status ...
I0914 19:05:59.241422 29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
I0914 19:06:02.195757 29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0914 19:06:02.195786 29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0914 19:06:02.195796 29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
I0914 19:06:02.307219 29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-system-namespaces-controller ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0914 19:06:02.307250 29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-system-namespaces-controller ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0914 19:06:02.807963 29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
I0914 19:06:02.814842 29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0914 19:06:02.814876 29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0914 19:06:03.307503 29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
I0914 19:06:03.315888 29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0914 19:06:03.315914 29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0914 19:06:03.807505 29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
I0914 19:06:03.812721 29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
ok
I0914 19:06:03.812788 29302 round_trippers.go:463] GET https://192.168.39.14:8443/version
I0914 19:06:03.812794 29302 round_trippers.go:469] Request Headers:
I0914 19:06:03.812802 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:03.812809 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:03.821345 29302 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0914 19:06:03.821376 29302 round_trippers.go:577] Response Headers:
I0914 19:06:03.821387 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:03.821396 29302 round_trippers.go:580] Content-Length: 263
I0914 19:06:03.821402 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:03 GMT
I0914 19:06:03.821410 29302 round_trippers.go:580] Audit-Id: a2a9e97f-3007-4290-8f99-481d06fc6049
I0914 19:06:03.821417 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:03.821424 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:03.821433 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:03.821483 29302 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.1",
"gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
"gitTreeState": "clean",
"buildDate": "2023-08-24T11:16:30Z",
"goVersion": "go1.20.7",
"compiler": "gc",
"platform": "linux/amd64"
}
I0914 19:06:03.821569 29302 api_server.go:141] control plane version: v1.28.1
I0914 19:06:03.821589 29302 api_server.go:131] duration metric: took 4.580178903s to wait for apiserver health ...
I0914 19:06:03.821600 29302 cni.go:84] Creating CNI manager for ""
I0914 19:06:03.821611 29302 cni.go:136] 3 nodes found, recommending kindnet
I0914 19:06:03.823525 29302 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0914 19:06:03.825085 29302 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0914 19:06:03.832345 29302 command_runner.go:130] > File: /opt/cni/bin/portmap
I0914 19:06:03.832364 29302 command_runner.go:130] > Size: 2615256 Blocks: 5112 IO Block: 4096 regular file
I0914 19:06:03.832370 29302 command_runner.go:130] > Device: 11h/17d Inode: 3544 Links: 1
I0914 19:06:03.832380 29302 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0914 19:06:03.832391 29302 command_runner.go:130] > Access: 2023-09-14 19:05:33.824543091 +0000
I0914 19:06:03.832399 29302 command_runner.go:130] > Modify: 2023-09-12 03:24:25.000000000 +0000
I0914 19:06:03.832416 29302 command_runner.go:130] > Change: 2023-09-14 19:05:31.874543091 +0000
I0914 19:06:03.832422 29302 command_runner.go:130] > Birth: -
I0914 19:06:03.832466 29302 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
I0914 19:06:03.832475 29302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0914 19:06:03.901488 29302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0914 19:06:05.205755 29302 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0914 19:06:05.209188 29302 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0914 19:06:05.212024 29302 command_runner.go:130] > serviceaccount/kindnet unchanged
I0914 19:06:05.225376 29302 command_runner.go:130] > daemonset.apps/kindnet configured
I0914 19:06:05.229823 29302 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.32829993s)
I0914 19:06:05.229853 29302 system_pods.go:43] waiting for kube-system pods to appear ...
I0914 19:06:05.229964 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
I0914 19:06:05.229975 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.229982 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.229988 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.234117 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:05.234139 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.234149 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.234158 29302 round_trippers.go:580] Audit-Id: 78bdb13b-ed79-4db3-8008-4289bacf78fd
I0914 19:06:05.234172 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.234180 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.234188 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.234195 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.236145 29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"795"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84544 chars]
I0914 19:06:05.239946 29302 system_pods.go:59] 12 kube-system pods found
I0914 19:06:05.239984 29302 system_pods.go:61] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0914 19:06:05.239998 29302 system_pods.go:61] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0914 19:06:05.240008 29302 system_pods.go:61] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0914 19:06:05.240015 29302 system_pods.go:61] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
I0914 19:06:05.240026 29302 system_pods.go:61] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
I0914 19:06:05.240036 29302 system_pods.go:61] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0914 19:06:05.240054 29302 system_pods.go:61] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0914 19:06:05.240067 29302 system_pods.go:61] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
I0914 19:06:05.240073 29302 system_pods.go:61] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
I0914 19:06:05.240087 29302 system_pods.go:61] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0914 19:06:05.240101 29302 system_pods.go:61] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0914 19:06:05.240113 29302 system_pods.go:61] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0914 19:06:05.240123 29302 system_pods.go:74] duration metric: took 10.263188ms to wait for pod list to return data ...
I0914 19:06:05.240135 29302 node_conditions.go:102] verifying NodePressure condition ...
I0914 19:06:05.240193 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
I0914 19:06:05.240202 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.240212 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.240223 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.245363 29302 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0914 19:06:05.245382 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.245393 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.245401 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.245416 29302 round_trippers.go:580] Audit-Id: ee9162aa-d308-4bb2-927d-55e7e1011d87
I0914 19:06:05.245424 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.245435 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.245471 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.245800 29302 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"795"},"items":[{"metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13790 chars]
I0914 19:06:05.246934 29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0914 19:06:05.246965 29302 node_conditions.go:123] node cpu capacity is 2
I0914 19:06:05.246982 29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0914 19:06:05.246996 29302 node_conditions.go:123] node cpu capacity is 2
I0914 19:06:05.247002 29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0914 19:06:05.247012 29302 node_conditions.go:123] node cpu capacity is 2
I0914 19:06:05.247020 29302 node_conditions.go:105] duration metric: took 6.879016ms to run NodePressure ...
I0914 19:06:05.247043 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0914 19:06:05.487041 29302 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I0914 19:06:05.487069 29302 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I0914 19:06:05.487097 29302 kubeadm.go:772] waiting for restarted kubelet to initialise ...
I0914 19:06:05.487490 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
I0914 19:06:05.487506 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.487516 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.487526 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.491797 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:05.491820 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.491831 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.491840 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.491848 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.491857 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.491866 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.491875 29302 round_trippers.go:580] Audit-Id: 9814298e-c189-437e-bfca-dbe0a19423d2
I0914 19:06:05.492280 29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"797"},"items":[{"metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 29761 chars]
I0914 19:06:05.493221 29302 kubeadm.go:787] kubelet initialised
I0914 19:06:05.493240 29302 kubeadm.go:788] duration metric: took 6.131207ms waiting for restarted kubelet to initialise ...
I0914 19:06:05.493249 29302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0914 19:06:05.493307 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
I0914 19:06:05.493322 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.493334 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.493347 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.496849 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:05.496867 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.496876 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.496885 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.496892 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.496901 29302 round_trippers.go:580] Audit-Id: a7031aa1-24df-4c90-9e52-85f8f96f783c
I0914 19:06:05.496912 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.496921 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.497873 29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"797"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84544 chars]
I0914 19:06:05.500273 29302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
I0914 19:06:05.500335 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:05.500343 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.500350 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.500356 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.502411 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:05.502429 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.502441 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.502449 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.502459 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.502469 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.502478 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.502490 29302 round_trippers.go:580] Audit-Id: f347830a-65d2-4cb4-8423-8b8fc5cc870f
I0914 19:06:05.502830 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:05.503304 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:05.503318 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.503328 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.503337 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.505839 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:05.505853 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.505864 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.505870 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.505875 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.505880 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.505886 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.505894 29302 round_trippers.go:580] Audit-Id: 71902073-b1b8-4c71-b1d1-af71d48217f1
I0914 19:06:05.506071 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:05.506467 29302 pod_ready.go:97] node "multinode-040952" hosting pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.506490 29302 pod_ready.go:81] duration metric: took 6.199179ms waiting for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
E0914 19:06:05.506501 29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.506518 29302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:05.506572 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
I0914 19:06:05.506583 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.506593 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.506606 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.508379 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:05.508391 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.508397 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.508403 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.508408 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.508414 29302 round_trippers.go:580] Audit-Id: adfe03d4-2812-4ba5-98dd-67afaa529395
I0914 19:06:05.508419 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.508425 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.508772 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
I0914 19:06:05.509094 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:05.509104 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.509111 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.509116 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.510985 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:05.511003 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.511012 29302 round_trippers.go:580] Audit-Id: 0ee321ba-916a-449f-a719-2eb1a4973cde
I0914 19:06:05.511019 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.511028 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.511036 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.511044 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.511057 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.511184 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:05.511454 29302 pod_ready.go:97] node "multinode-040952" hosting pod "etcd-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.511470 29302 pod_ready.go:81] duration metric: took 4.945047ms waiting for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
E0914 19:06:05.511477 29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "etcd-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.511489 29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:05.511533 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-040952
I0914 19:06:05.511540 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.511546 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.511552 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.513172 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:05.513189 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.513198 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.513206 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.513213 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.513222 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.513230 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.513246 29302 round_trippers.go:580] Audit-Id: 98886ad5-cb3e-42c1-9236-b75a8e09f5f5
I0914 19:06:05.513380 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-040952","namespace":"kube-system","uid":"10fd42d2-c2af-48e4-8724-c8ffe95daa20","resourceVersion":"786","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.14:8443","kubernetes.io/config.hash":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.mirror":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.seen":"2023-09-14T19:01:40.726715710Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7850 chars]
I0914 19:06:05.513760 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:05.513773 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.513780 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.513786 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.515437 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:05.515456 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.515464 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.515472 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.515481 29302 round_trippers.go:580] Audit-Id: cc794f2f-df9b-4b8c-8271-303fbb3bda2a
I0914 19:06:05.515489 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.515502 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.515510 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.515753 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:05.516001 29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-apiserver-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.516014 29302 pod_ready.go:81] duration metric: took 4.515313ms waiting for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
E0914 19:06:05.516021 29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-apiserver-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.516027 29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:05.516066 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-040952
I0914 19:06:05.516073 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.516080 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.516086 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.518245 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:05.518263 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.518277 29302 round_trippers.go:580] Audit-Id: 6779b7f0-25f9-49d1-be85-87a44d8c3552
I0914 19:06:05.518286 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.518294 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.518301 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.518314 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.518322 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.518564 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-040952","namespace":"kube-system","uid":"a3657cb3-c202-4067-83e1-e015b97f23c7","resourceVersion":"783","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.mirror":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.seen":"2023-09-14T19:01:40.726708753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7436 chars]
I0914 19:06:05.630264 29302 request.go:629] Waited for 111.324976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:05.630352 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:05.630359 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.630372 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.630382 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.632981 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:05.633000 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.633006 29302 round_trippers.go:580] Audit-Id: fd7872d6-edd4-429f-97f2-b2ec1c12de54
I0914 19:06:05.633012 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.633017 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.633023 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.633028 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.633036 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.633196 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:05.633629 29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-controller-manager-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.633656 29302 pod_ready.go:81] duration metric: took 117.619154ms waiting for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
E0914 19:06:05.633669 29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-controller-manager-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:05.633680 29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
I0914 19:06:05.830043 29302 request.go:629] Waited for 196.287848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
I0914 19:06:05.830099 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
I0914 19:06:05.830103 29302 round_trippers.go:469] Request Headers:
I0914 19:06:05.830111 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:05.830118 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:05.832762 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:05.832785 29302 round_trippers.go:577] Response Headers:
I0914 19:06:05.832794 29302 round_trippers.go:580] Audit-Id: 3c18be9a-6c71-4025-be83-5fc9c53246a5
I0914 19:06:05.832801 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:05.832808 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:05.832815 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:05.832822 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:05.832829 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:05.833118 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gldkh","generateName":"kube-proxy-","namespace":"kube-system","uid":"55ba7c02-d066-4399-a622-621499fbc662","resourceVersion":"541","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
I0914 19:06:06.029994 29302 request.go:629] Waited for 196.460915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
I0914 19:06:06.030079 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
I0914 19:06:06.030087 29302 round_trippers.go:469] Request Headers:
I0914 19:06:06.030099 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:06.030108 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:06.032502 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:06.032520 29302 round_trippers.go:577] Response Headers:
I0914 19:06:06.032527 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:05 GMT
I0914 19:06:06.032532 29302 round_trippers.go:580] Audit-Id: 9d3f52cf-02ab-4abb-92c1-8a7d06224f0e
I0914 19:06:06.032538 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:06.032542 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:06.032547 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:06.032553 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:06.032888 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m02","uid":"26bddb4d-d211-4e3d-a188-317e100d2aa5","resourceVersion":"608","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3266 chars]
I0914 19:06:06.033151 29302 pod_ready.go:92] pod "kube-proxy-gldkh" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:06.033165 29302 pod_ready.go:81] duration metric: took 399.477836ms waiting for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
I0914 19:06:06.033173 29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
I0914 19:06:06.230655 29302 request.go:629] Waited for 197.428191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
I0914 19:06:06.230712 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
I0914 19:06:06.230718 29302 round_trippers.go:469] Request Headers:
I0914 19:06:06.230725 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:06.230733 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:06.233365 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:06.233384 29302 round_trippers.go:577] Response Headers:
I0914 19:06:06.233391 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:06 GMT
I0914 19:06:06.233397 29302 round_trippers.go:580] Audit-Id: 53af8c6b-f3d3-4507-ba18-bcb4d7a95376
I0914 19:06:06.233406 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:06.233422 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:06.233431 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:06.233443 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:06.233771 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gpl2p","generateName":"kube-proxy-","namespace":"kube-system","uid":"4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f","resourceVersion":"761","creationTimestamp":"2023-09-14T19:03:50Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:03:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
I0914 19:06:06.430710 29302 request.go:629] Waited for 196.348215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
I0914 19:06:06.430762 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
I0914 19:06:06.430769 29302 round_trippers.go:469] Request Headers:
I0914 19:06:06.430779 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:06.430788 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:06.433906 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:06.433930 29302 round_trippers.go:577] Response Headers:
I0914 19:06:06.433942 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:06.433951 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:06.433960 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:06.433969 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:06 GMT
I0914 19:06:06.433985 29302 round_trippers.go:580] Audit-Id: 1280bf02-d81c-4bca-b4e5-275129840268
I0914 19:06:06.433994 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:06.434112 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m03","uid":"28b45907-e363-4b10-afa7-ecf3cea247b8","resourceVersion":"772","creationTimestamp":"2023-09-14T19:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3204 chars]
I0914 19:06:06.434453 29302 pod_ready.go:92] pod "kube-proxy-gpl2p" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:06.434474 29302 pod_ready.go:81] duration metric: took 401.294532ms waiting for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
I0914 19:06:06.434488 29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
I0914 19:06:06.630939 29302 request.go:629] Waited for 196.385647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
I0914 19:06:06.631022 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
I0914 19:06:06.631030 29302 round_trippers.go:469] Request Headers:
I0914 19:06:06.631042 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:06.631051 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:06.633497 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:06.633520 29302 round_trippers.go:577] Response Headers:
I0914 19:06:06.633530 29302 round_trippers.go:580] Audit-Id: 1dc1f940-384d-494a-8e64-361f1ad205ba
I0914 19:06:06.633543 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:06.633552 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:06.633562 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:06.633573 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:06.633584 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:06 GMT
I0914 19:06:06.633766 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbsmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68fe199-9969-47a9-95a1-04e766c5dbaa","resourceVersion":"788","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5928 chars]
I0914 19:06:06.830679 29302 request.go:629] Waited for 196.393813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:06.830735 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:06.830740 29302 round_trippers.go:469] Request Headers:
I0914 19:06:06.830747 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:06.830754 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:06.833354 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:06.833375 29302 round_trippers.go:577] Response Headers:
I0914 19:06:06.833382 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:06.833387 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:06.833392 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:06.833397 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:06.833402 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:06 GMT
I0914 19:06:06.833407 29302 round_trippers.go:580] Audit-Id: a24b66f4-fa51-4df4-9bc5-590f310c8108
I0914 19:06:06.833985 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:06.834382 29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-proxy-hbsmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:06.834408 29302 pod_ready.go:81] duration metric: took 399.910926ms waiting for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
E0914 19:06:06.834420 29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-proxy-hbsmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:06.834433 29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:07.030857 29302 request.go:629] Waited for 196.352242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:07.030940 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:07.030951 29302 round_trippers.go:469] Request Headers:
I0914 19:06:07.030964 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:07.030977 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:07.034225 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:07.034245 29302 round_trippers.go:577] Response Headers:
I0914 19:06:07.034253 29302 round_trippers.go:580] Audit-Id: 71cfae50-3c69-4f2b-8709-aad710c8dec2
I0914 19:06:07.034260 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:07.034268 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:07.034276 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:07.034289 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:07.034298 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:06 GMT
I0914 19:06:07.034501 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
I0914 19:06:07.230128 29302 request.go:629] Waited for 195.265564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:07.230211 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:07.230221 29302 round_trippers.go:469] Request Headers:
I0914 19:06:07.230229 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:07.230235 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:07.233612 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:07.233631 29302 round_trippers.go:577] Response Headers:
I0914 19:06:07.233641 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:07 GMT
I0914 19:06:07.233648 29302 round_trippers.go:580] Audit-Id: c6e16c92-92f1-4f61-b0d2-523db2c467d1
I0914 19:06:07.233656 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:07.233665 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:07.233675 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:07.233684 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:07.234058 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:07.234344 29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-scheduler-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:07.234368 29302 pod_ready.go:81] duration metric: took 399.923264ms waiting for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
E0914 19:06:07.234381 29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-scheduler-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
I0914 19:06:07.234393 29302 pod_ready.go:38] duration metric: took 1.741133779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0914 19:06:07.234417 29302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0914 19:06:07.250231 29302 command_runner.go:130] > -16
I0914 19:06:07.250255 29302 ops.go:34] apiserver oom_adj: -16
I0914 19:06:07.250263 29302 kubeadm.go:640] restartCluster took 21.909989817s
I0914 19:06:07.250271 29302 kubeadm.go:406] StartCluster complete in 21.938026901s
I0914 19:06:07.250290 29302 settings.go:142] acquiring lock: {Name:mkaf2d84e9fceec2029b98353d3d8cae1b369e09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0914 19:06:07.250389 29302 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17217-7285/kubeconfig
I0914 19:06:07.251059 29302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-7285/kubeconfig: {Name:mkd810f3a7b7ee0c3e3eff94a19f3da881e8200c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0914 19:06:07.251279 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0914 19:06:07.251383 29302 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0914 19:06:07.251517   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 19:06:07.251534   29302 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17217-7285/kubeconfig
I0914 19:06:07.253531   29302 out.go:177] * Enabled addons: 
I0914 19:06:07.255467 29302 addons.go:502] enable addons completed in 4.093858ms: enabled=[]
I0914 19:06:07.255670 29302 kapi.go:59] client config for multinode-040952: &rest.Config{Host:"https://192.168.39.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key", CAFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0914 19:06:07.255997 29302 round_trippers.go:463] GET https://192.168.39.14:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0914 19:06:07.256010 29302 round_trippers.go:469] Request Headers:
I0914 19:06:07.256017 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:07.256025 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:07.263309 29302 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0914 19:06:07.263329 29302 round_trippers.go:577] Response Headers:
I0914 19:06:07.263340 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:07.263348 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:07.263354 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:07.263359 29302 round_trippers.go:580] Content-Length: 291
I0914 19:06:07.263365 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:07 GMT
I0914 19:06:07.263370 29302 round_trippers.go:580] Audit-Id: 5a75d744-b3cd-40e6-abf4-7b1c8daac075
I0914 19:06:07.263377 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:07.263397 29302 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9776e459-4280-488a-924c-4e921bbd9495","resourceVersion":"796","creationTimestamp":"2023-09-14T19:01:40Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0914 19:06:07.263508 29302 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-040952" context rescaled to 1 replicas
I0914 19:06:07.263529 29302 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0914 19:06:07.264985 29302 out.go:177] * Verifying Kubernetes components...
I0914 19:06:07.266359 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0914 19:06:07.389385 29302 command_runner.go:130] > apiVersion: v1
I0914 19:06:07.389403 29302 command_runner.go:130] > data:
I0914 19:06:07.389408 29302 command_runner.go:130] > Corefile: |
I0914 19:06:07.389411 29302 command_runner.go:130] > .:53 {
I0914 19:06:07.389415 29302 command_runner.go:130] > log
I0914 19:06:07.389421 29302 command_runner.go:130] > errors
I0914 19:06:07.389425 29302 command_runner.go:130] > health {
I0914 19:06:07.389429 29302 command_runner.go:130] > lameduck 5s
I0914 19:06:07.389433 29302 command_runner.go:130] > }
I0914 19:06:07.389437 29302 command_runner.go:130] > ready
I0914 19:06:07.389443 29302 command_runner.go:130] > kubernetes cluster.local in-addr.arpa ip6.arpa {
I0914 19:06:07.389447 29302 command_runner.go:130] > pods insecure
I0914 19:06:07.389455 29302 command_runner.go:130] > fallthrough in-addr.arpa ip6.arpa
I0914 19:06:07.389473 29302 command_runner.go:130] > ttl 30
I0914 19:06:07.389477 29302 command_runner.go:130] > }
I0914 19:06:07.389483 29302 command_runner.go:130] > prometheus :9153
I0914 19:06:07.389487 29302 command_runner.go:130] > hosts {
I0914 19:06:07.389493 29302 command_runner.go:130] > 192.168.39.1 host.minikube.internal
I0914 19:06:07.389497 29302 command_runner.go:130] > fallthrough
I0914 19:06:07.389501 29302 command_runner.go:130] > }
I0914 19:06:07.389508 29302 command_runner.go:130] > forward . /etc/resolv.conf {
I0914 19:06:07.389513 29302 command_runner.go:130] > max_concurrent 1000
I0914 19:06:07.389517 29302 command_runner.go:130] > }
I0914 19:06:07.389520 29302 command_runner.go:130] > cache 30
I0914 19:06:07.389527 29302 command_runner.go:130] > loop
I0914 19:06:07.389532 29302 command_runner.go:130] > reload
I0914 19:06:07.389541 29302 command_runner.go:130] > loadbalance
I0914 19:06:07.389549 29302 command_runner.go:130] > }
I0914 19:06:07.389558 29302 command_runner.go:130] > kind: ConfigMap
I0914 19:06:07.389564 29302 command_runner.go:130] > metadata:
I0914 19:06:07.389573 29302 command_runner.go:130] > creationTimestamp: "2023-09-14T19:01:40Z"
I0914 19:06:07.389585 29302 command_runner.go:130] > name: coredns
I0914 19:06:07.389594 29302 command_runner.go:130] > namespace: kube-system
I0914 19:06:07.389604 29302 command_runner.go:130] > resourceVersion: "404"
I0914 19:06:07.389612 29302 command_runner.go:130] > uid: 77b79b35-a304-4075-b4c4-6b8a52cfe75c
I0914 19:06:07.389643 29302 node_ready.go:35] waiting up to 6m0s for node "multinode-040952" to be "Ready" ...
I0914 19:06:07.389797 29302 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0914 19:06:07.431021 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:07.431047 29302 round_trippers.go:469] Request Headers:
I0914 19:06:07.431059 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:07.431069 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:07.434336 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:07.434359 29302 round_trippers.go:577] Response Headers:
I0914 19:06:07.434367 29302 round_trippers.go:580] Audit-Id: f0218504-ef8b-4fee-a836-3f16c97e6d1d
I0914 19:06:07.434372 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:07.434378 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:07.434383 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:07.434389 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:07.434399 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:07 GMT
I0914 19:06:07.434888 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:07.630657 29302 request.go:629] Waited for 195.358734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:07.630713 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:07.630720 29302 round_trippers.go:469] Request Headers:
I0914 19:06:07.630729 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:07.630738 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:07.635002 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:07.635021 29302 round_trippers.go:577] Response Headers:
I0914 19:06:07.635027 29302 round_trippers.go:580] Audit-Id: 0e51cba7-34eb-44c3-be48-8785725a128f
I0914 19:06:07.635033 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:07.635038 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:07.635043 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:07.635048 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:07.635053 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:07 GMT
I0914 19:06:07.635788 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:08.136884 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:08.136903 29302 round_trippers.go:469] Request Headers:
I0914 19:06:08.136913 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:08.136919 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:08.140137 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:08.140160 29302 round_trippers.go:577] Response Headers:
I0914 19:06:08.140168 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:08 GMT
I0914 19:06:08.140173 29302 round_trippers.go:580] Audit-Id: 9ec77217-1afd-42b6-aaf7-211e85629e48
I0914 19:06:08.140179 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:08.140184 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:08.140189 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:08.140194 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:08.140344 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:08.637040 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:08.637079 29302 round_trippers.go:469] Request Headers:
I0914 19:06:08.637091 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:08.637101 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:08.639714 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:08.639733 29302 round_trippers.go:577] Response Headers:
I0914 19:06:08.639744 29302 round_trippers.go:580] Audit-Id: d47f9fd4-8dec-46b1-8ce9-436c0350c5ca
I0914 19:06:08.639752 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:08.639760 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:08.639769 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:08.639779 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:08.639788 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:08 GMT
I0914 19:06:08.640112 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:09.136649 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:09.136682 29302 round_trippers.go:469] Request Headers:
I0914 19:06:09.136690 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:09.136696 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:09.139686 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:09.139704 29302 round_trippers.go:577] Response Headers:
I0914 19:06:09.139715 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:09.139724 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:09.139733 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:09.139739 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:09 GMT
I0914 19:06:09.139745 29302 round_trippers.go:580] Audit-Id: ae97ecdc-ac59-4df9-80fb-ab01ff2852ec
I0914 19:06:09.139750 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:09.140167 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:09.636845 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:09.636866 29302 round_trippers.go:469] Request Headers:
I0914 19:06:09.636874 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:09.636880 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:09.639508 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:09.639525 29302 round_trippers.go:577] Response Headers:
I0914 19:06:09.639534 29302 round_trippers.go:580] Audit-Id: 2a2efe7f-361b-45a2-b3cb-a7e9e84043e9
I0914 19:06:09.639541 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:09.639549 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:09.639558 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:09.639568 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:09.639578 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:09 GMT
I0914 19:06:09.639997 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
I0914 19:06:09.640405 29302 node_ready.go:58] node "multinode-040952" has status "Ready":"False"
I0914 19:06:10.136599 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:10.136624 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.136638 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.136648 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.140273 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:10.140297 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.140306 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.140313 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.140320 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.140332 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.140340 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.140347 29302 round_trippers.go:580] Audit-Id: 1af6dc6d-a25f-4a81-86a3-d239224c606e
I0914 19:06:10.140506 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:10.140798 29302 node_ready.go:49] node "multinode-040952" has status "Ready":"True"
I0914 19:06:10.140815 29302 node_ready.go:38] duration metric: took 2.751153874s waiting for node "multinode-040952" to be "Ready" ...
I0914 19:06:10.140825 29302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0914 19:06:10.140877 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
I0914 19:06:10.140887 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.140897 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.140907 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.145518 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:10.145535 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.145542 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.145547 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.145557 29302 round_trippers.go:580] Audit-Id: d738ec8e-27bb-4210-8329-89e64df5055c
I0914 19:06:10.145569 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.145579 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.145590 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.146881 29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"868"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83954 chars]
I0914 19:06:10.149263 29302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
I0914 19:06:10.149331 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:10.149342 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.149353 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.149364 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.151221 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:10.151235 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.151241 29302 round_trippers.go:580] Audit-Id: 9dce5aa8-17a9-43c4-9448-421e8ef000fe
I0914 19:06:10.151247 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.151255 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.151264 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.151281 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.151288 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.151447 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:10.151815 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:10.151829 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.151839 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.151847 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.154035 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:10.154047 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.154053 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.154058 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.154063 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.154069 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.154075 29302 round_trippers.go:580] Audit-Id: f451201e-e118-40ff-8809-e06aa3aa8567
I0914 19:06:10.154084 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.154352 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:10.154718 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:10.154731 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.154742 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.154752 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.156468 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:10.156482 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.156491 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.156501 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.156513 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.156524 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.156538 29302 round_trippers.go:580] Audit-Id: 056aca82-7d21-4539-9de8-316f54300fbb
I0914 19:06:10.156548 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.156671 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:10.157120 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:10.157136 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.157147 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.157162 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.159000 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:10.159014 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.159023 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.159031 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.159039 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.159049 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.159059 29302 round_trippers.go:580] Audit-Id: 053f7e6a-3d64-496b-a692-e6d8d7de77dc
I0914 19:06:10.159074 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.159292 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:10.660315 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:10.660343 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.660354 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.660364 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.662669 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:10.662688 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.662694 29302 round_trippers.go:580] Audit-Id: 0b5959bf-4f92-40f5-bff0-64259ee8d0e9
I0914 19:06:10.662703 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.662711 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.662723 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.662732 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.662744 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.663162 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:10.663793 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:10.663810 29302 round_trippers.go:469] Request Headers:
I0914 19:06:10.663822 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:10.663830 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:10.667280 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:10.667294 29302 round_trippers.go:577] Response Headers:
I0914 19:06:10.667299 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:10.667304 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:10.667310 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:10 GMT
I0914 19:06:10.667315 29302 round_trippers.go:580] Audit-Id: adc471fd-2452-48eb-9634-4a15a4129e27
I0914 19:06:10.667320 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:10.667325 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:10.667519 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:11.160702 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:11.160731 29302 round_trippers.go:469] Request Headers:
I0914 19:06:11.160744 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:11.160753 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:11.164208 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:11.164227 29302 round_trippers.go:577] Response Headers:
I0914 19:06:11.164234 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:11.164240 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:11 GMT
I0914 19:06:11.164261 29302 round_trippers.go:580] Audit-Id: 3b81510c-ceb9-488e-bc2e-b21d77b051e2
I0914 19:06:11.164273 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:11.164281 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:11.164290 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:11.164555 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:11.165152 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:11.165174 29302 round_trippers.go:469] Request Headers:
I0914 19:06:11.165187 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:11.165197 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:11.168098 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:11.168117 29302 round_trippers.go:577] Response Headers:
I0914 19:06:11.168125 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:11.168133 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:11.168142 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:11 GMT
I0914 19:06:11.168151 29302 round_trippers.go:580] Audit-Id: 15145bd3-b367-4e99-b3ce-0ae58ef5c733
I0914 19:06:11.168161 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:11.168168 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:11.168530 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:11.660168 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:11.660193 29302 round_trippers.go:469] Request Headers:
I0914 19:06:11.660205 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:11.660216 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:11.663403 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:11.663424 29302 round_trippers.go:577] Response Headers:
I0914 19:06:11.663434 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:11.663442 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:11 GMT
I0914 19:06:11.663449 29302 round_trippers.go:580] Audit-Id: 3362ce2b-8605-45fd-8885-3eaeb408ef56
I0914 19:06:11.663457 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:11.663466 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:11.663476 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:11.664334 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:11.664760 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:11.664775 29302 round_trippers.go:469] Request Headers:
I0914 19:06:11.664785 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:11.664795 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:11.671505 29302 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0914 19:06:11.671522 29302 round_trippers.go:577] Response Headers:
I0914 19:06:11.671530 29302 round_trippers.go:580] Audit-Id: 654293a2-0981-4bec-9543-4726a90c72a3
I0914 19:06:11.671539 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:11.671551 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:11.671560 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:11.671567 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:11.671576 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:11 GMT
I0914 19:06:11.671723 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:12.160486 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:12.160512 29302 round_trippers.go:469] Request Headers:
I0914 19:06:12.160524 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:12.160534 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:12.163604 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:12.163624 29302 round_trippers.go:577] Response Headers:
I0914 19:06:12.163634 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:12.163644 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:12.163652 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:12.163661 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:12 GMT
I0914 19:06:12.163674 29302 round_trippers.go:580] Audit-Id: 746f41fe-b54a-4602-ba74-6665d07e9fc7
I0914 19:06:12.163683 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:12.164257 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:12.164698 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:12.164712 29302 round_trippers.go:469] Request Headers:
I0914 19:06:12.164721 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:12.164731 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:12.166907 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:12.166920 29302 round_trippers.go:577] Response Headers:
I0914 19:06:12.166926 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:12.166934 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:12.166942 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:12 GMT
I0914 19:06:12.166953 29302 round_trippers.go:580] Audit-Id: e83a6e6d-40cb-4779-8c0a-8f5c050ff286
I0914 19:06:12.166961 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:12.166970 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:12.167376 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:12.167641 29302 pod_ready.go:102] pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace has status "Ready":"False"
I0914 19:06:12.660012 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:12.660034 29302 round_trippers.go:469] Request Headers:
I0914 19:06:12.660051 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:12.660059 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:12.664300 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:12.664327 29302 round_trippers.go:577] Response Headers:
I0914 19:06:12.664338 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:12.664345 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:12.664352 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:12.664360 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:12.664369 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:12 GMT
I0914 19:06:12.664384 29302 round_trippers.go:580] Audit-Id: 49e3af30-584c-4ef5-942f-2f32701b7bc7
I0914 19:06:12.665270 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
I0914 19:06:12.665705 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:12.665719 29302 round_trippers.go:469] Request Headers:
I0914 19:06:12.665729 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:12.665738 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:12.668068 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:12.668088 29302 round_trippers.go:577] Response Headers:
I0914 19:06:12.668097 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:12.668105 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:12.668112 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:12 GMT
I0914 19:06:12.668120 29302 round_trippers.go:580] Audit-Id: 28f046b6-f759-4197-80f7-730e48f958ff
I0914 19:06:12.668128 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:12.668142 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:12.668260 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:13.159876 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
I0914 19:06:13.159904 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.159912 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.159918 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.163892 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:13.163917 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.163928 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.163937 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.163944 29302 round_trippers.go:580] Audit-Id: 2bafd162-6571-48ef-8c6f-4b72770d2047
I0914 19:06:13.163952 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.163966 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.163976 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.165138 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
I0914 19:06:13.165753 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:13.165771 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.165782 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.165791 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.168088 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:13.168105 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.168112 29302 round_trippers.go:580] Audit-Id: 767659c2-2c07-4c69-b006-9d19ff6d9f6d
I0914 19:06:13.168118 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.168123 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.168128 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.168135 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.168143 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.168401 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:13.168681 29302 pod_ready.go:92] pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:13.168695 29302 pod_ready.go:81] duration metric: took 3.01941396s waiting for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
I0914 19:06:13.168703 29302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:13.168801 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
I0914 19:06:13.168814 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.168832 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.168846 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.171347 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:13.171368 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.171375 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.171380 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.171388 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.171397 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.171404 29302 round_trippers.go:580] Audit-Id: b18d0768-dc31-460c-beed-e50e3a19d6cf
I0914 19:06:13.171411 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.172044 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
I0914 19:06:13.172379 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:13.172391 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.172399 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.172405 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.175143 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:13.175157 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.175163 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.175168 29302 round_trippers.go:580] Audit-Id: f6242de5-c366-4c79-aa4f-5b2c5ce0d01e
I0914 19:06:13.175174 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.175182 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.175190 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.175200 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.176009 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:13.176284 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
I0914 19:06:13.176295 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.176301 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.176307 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.178355 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:13.178376 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.178382 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.178387 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.178393 29302 round_trippers.go:580] Audit-Id: 8172c157-f43e-42e0-b3a6-8cbd28c89432
I0914 19:06:13.178401 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.178409 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.178417 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.178832 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
I0914 19:06:13.179275 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:13.179292 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.179302 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.179309 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.180983 29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0914 19:06:13.180994 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.180999 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.181004 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.181009 29302 round_trippers.go:580] Audit-Id: 7d797daa-6bd3-4f35-8046-01886aa5fa4e
I0914 19:06:13.181014 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.181019 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.181024 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.181219 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:13.682300 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
I0914 19:06:13.682333 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.682342 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.682347 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.685143 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:13.685160 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.685166 29302 round_trippers.go:580] Audit-Id: 0910f73d-781a-443b-b8e1-0d453e50ba92
I0914 19:06:13.685172 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.685177 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.685182 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.685187 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.685192 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.685503 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
I0914 19:06:13.685920 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:13.685934 29302 round_trippers.go:469] Request Headers:
I0914 19:06:13.685941 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:13.685947 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:13.688227 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:13.688240 29302 round_trippers.go:577] Response Headers:
I0914 19:06:13.688246 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:13.688252 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:13.688260 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:13.688268 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:13.688281 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:13 GMT
I0914 19:06:13.688288 29302 round_trippers.go:580] Audit-Id: 078b7d2a-29bc-4729-9a02-7236c4049ad7
I0914 19:06:13.688474 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:14.182102 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
I0914 19:06:14.182125 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.182133 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.182140 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.187517 29302 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0914 19:06:14.187544 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.187554 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.187562 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.187569 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.187577 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.187586 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.187594 29302 round_trippers.go:580] Audit-Id: dd780464-2280-4b93-b398-b175b603d0fe
I0914 19:06:14.188035 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
I0914 19:06:14.188554 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:14.188572 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.188583 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.188592 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.190606 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:14.190620 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.190626 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.190632 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.190637 29302 round_trippers.go:580] Audit-Id: 104efd51-1025-4755-af8b-f207cfcdb912
I0914 19:06:14.190642 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.190647 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.190652 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.190979 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:14.682687 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
I0914 19:06:14.682711 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.682719 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.682725 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.690728 29302 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0914 19:06:14.690764 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.690775 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.690783 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.690791 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.690799 29302 round_trippers.go:580] Audit-Id: 4dc518a5-6cbd-4561-8ed6-e72b82b2abda
I0914 19:06:14.690806 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.690814 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.690995 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"887","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6071 chars]
I0914 19:06:14.691406 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:14.691420 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.691427 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.691433 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.697743 29302 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0914 19:06:14.697765 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.697774 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.697779 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.697784 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.697789 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.697794 29302 round_trippers.go:580] Audit-Id: 07d3511e-72f3-415a-b985-0c38f9c2dc48
I0914 19:06:14.697799 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.698080 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:14.698416 29302 pod_ready.go:92] pod "etcd-multinode-040952" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:14.698432 29302 pod_ready.go:81] duration metric: took 1.529723471s waiting for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:14.698448 29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:14.698508 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-040952
I0914 19:06:14.698517 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.698524 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.698530 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.703391 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:14.703406 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.703412 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.703418 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.703423 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.703428 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.703433 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.703439 29302 round_trippers.go:580] Audit-Id: 0b9ff4df-c192-426d-837d-19a8ddc6d994
I0914 19:06:14.703718 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-040952","namespace":"kube-system","uid":"10fd42d2-c2af-48e4-8724-c8ffe95daa20","resourceVersion":"871","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.14:8443","kubernetes.io/config.hash":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.mirror":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.seen":"2023-09-14T19:01:40.726715710Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7606 chars]
I0914 19:06:14.704127 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:14.704140 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.704147 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.704153 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.706425 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:14.706444 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.706451 29302 round_trippers.go:580] Audit-Id: 6eee19bb-2b91-4350-b2ae-7edfbd41930d
I0914 19:06:14.706457 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.706462 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.706467 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.706472 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.706478 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.706615 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:14.706908 29302 pod_ready.go:92] pod "kube-apiserver-multinode-040952" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:14.706921 29302 pod_ready.go:81] duration metric: took 8.465952ms waiting for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:14.706930 29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:14.706986 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-040952
I0914 19:06:14.706996 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.707007 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.707017 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.710085 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:14.710105 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.710115 29302 round_trippers.go:580] Audit-Id: 37a4af49-de22-42c5-8342-96bdccfba829
I0914 19:06:14.710126 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.710135 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.710143 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.710152 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.710160 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.710726 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-040952","namespace":"kube-system","uid":"a3657cb3-c202-4067-83e1-e015b97f23c7","resourceVersion":"884","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.mirror":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.seen":"2023-09-14T19:01:40.726708753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7174 chars]
I0914 19:06:14.830503 29302 request.go:629] Waited for 119.282235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:14.830554 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:14.830558 29302 round_trippers.go:469] Request Headers:
I0914 19:06:14.830566 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:14.830572 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:14.833064 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:14.833083 29302 round_trippers.go:577] Response Headers:
I0914 19:06:14.833090 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:14.833095 29302 round_trippers.go:580] Audit-Id: 7a8584d4-7b4d-4f0c-a673-2711303dfb2c
I0914 19:06:14.833100 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:14.833106 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:14.833110 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:14.833116 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:14.833241 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:14.833562 29302 pod_ready.go:92] pod "kube-controller-manager-multinode-040952" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:14.833577 29302 pod_ready.go:81] duration metric: took 126.641384ms waiting for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:14.833587 29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
I0914 19:06:15.030888 29302 request.go:629] Waited for 197.237265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
I0914 19:06:15.030946 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
I0914 19:06:15.030951 29302 round_trippers.go:469] Request Headers:
I0914 19:06:15.030960 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:15.030966 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:15.034339 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:15.034359 29302 round_trippers.go:577] Response Headers:
I0914 19:06:15.034366 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:15.034374 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:15.034386 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:15.034394 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:15.034408 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:14 GMT
I0914 19:06:15.034416 29302 round_trippers.go:580] Audit-Id: 3c39cfc6-1f06-4726-9679-50e437a9b84d
I0914 19:06:15.034690 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gldkh","generateName":"kube-proxy-","namespace":"kube-system","uid":"55ba7c02-d066-4399-a622-621499fbc662","resourceVersion":"541","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
I0914 19:06:15.230480 29302 request.go:629] Waited for 195.333524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
I0914 19:06:15.230552 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
I0914 19:06:15.230557 29302 round_trippers.go:469] Request Headers:
I0914 19:06:15.230565 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:15.230574 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:15.234304 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:15.234329 29302 round_trippers.go:577] Response Headers:
I0914 19:06:15.234339 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:15.234347 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:15.234359 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:15.234366 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:15 GMT
I0914 19:06:15.234377 29302 round_trippers.go:580] Audit-Id: 4a324e73-8fa1-482f-bde6-ae80be99f721
I0914 19:06:15.234386 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:15.234528 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m02","uid":"26bddb4d-d211-4e3d-a188-317e100d2aa5","resourceVersion":"608","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3266 chars]
I0914 19:06:15.234774 29302 pod_ready.go:92] pod "kube-proxy-gldkh" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:15.234787 29302 pod_ready.go:81] duration metric: took 401.195035ms waiting for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
I0914 19:06:15.234796 29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
I0914 19:06:15.430003 29302 request.go:629] Waited for 195.152769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
I0914 19:06:15.430096 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
I0914 19:06:15.430104 29302 round_trippers.go:469] Request Headers:
I0914 19:06:15.430118 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:15.430142 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:15.433237 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:15.433271 29302 round_trippers.go:577] Response Headers:
I0914 19:06:15.433281 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:15.433290 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:15.433300 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:15 GMT
I0914 19:06:15.433309 29302 round_trippers.go:580] Audit-Id: 92d372f9-e9c9-4d13-8b75-1b3ebd7f2435
I0914 19:06:15.433321 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:15.433329 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:15.433627 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gpl2p","generateName":"kube-proxy-","namespace":"kube-system","uid":"4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f","resourceVersion":"761","creationTimestamp":"2023-09-14T19:03:50Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:03:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
I0914 19:06:15.630434 29302 request.go:629] Waited for 196.369841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
I0914 19:06:15.630534 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
I0914 19:06:15.630546 29302 round_trippers.go:469] Request Headers:
I0914 19:06:15.630557 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:15.630568 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:15.633799 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:15.633824 29302 round_trippers.go:577] Response Headers:
I0914 19:06:15.633834 29302 round_trippers.go:580] Audit-Id: 8ea32575-14e9-412a-ba38-fd00269447f5
I0914 19:06:15.633844 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:15.633852 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:15.633864 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:15.633873 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:15.633887 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:15 GMT
I0914 19:06:15.634144 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m03","uid":"28b45907-e363-4b10-afa7-ecf3cea247b8","resourceVersion":"891","creationTimestamp":"2023-09-14T19:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3084 chars]
I0914 19:06:15.634401 29302 pod_ready.go:92] pod "kube-proxy-gpl2p" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:15.634416 29302 pod_ready.go:81] duration metric: took 399.614214ms waiting for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
I0914 19:06:15.634430 29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
I0914 19:06:15.830846 29302 request.go:629] Waited for 196.353294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
I0914 19:06:15.830928 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
I0914 19:06:15.830933 29302 round_trippers.go:469] Request Headers:
I0914 19:06:15.830945 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:15.830952 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:15.834221 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:15.834246 29302 round_trippers.go:577] Response Headers:
I0914 19:06:15.834259 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:15.834267 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:15.834274 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:15 GMT
I0914 19:06:15.834282 29302 round_trippers.go:580] Audit-Id: 44182567-ce38-4fce-a842-f78410d89ee9
I0914 19:06:15.834289 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:15.834298 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:15.834802 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbsmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68fe199-9969-47a9-95a1-04e766c5dbaa","resourceVersion":"798","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
I0914 19:06:16.030675 29302 request.go:629] Waited for 195.45562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:16.030731 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:16.030736 29302 round_trippers.go:469] Request Headers:
I0914 19:06:16.030743 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:16.030750 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:16.034236 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:16.034260 29302 round_trippers.go:577] Response Headers:
I0914 19:06:16.034267 29302 round_trippers.go:580] Audit-Id: e468604d-7ce9-469a-b812-ed3c9c650d6e
I0914 19:06:16.034275 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:16.034281 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:16.034286 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:16.034291 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:16.034297 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:16 GMT
I0914 19:06:16.034614 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:16.034941 29302 pod_ready.go:92] pod "kube-proxy-hbsmt" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:16.034956 29302 pod_ready.go:81] duration metric: took 400.519289ms waiting for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
I0914 19:06:16.034964 29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:16.230342 29302 request.go:629] Waited for 195.324407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:16.230449 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:16.230454 29302 round_trippers.go:469] Request Headers:
I0914 19:06:16.230462 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:16.230470 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:16.233547 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:16.233564 29302 round_trippers.go:577] Response Headers:
I0914 19:06:16.233572 29302 round_trippers.go:580] Audit-Id: 224fde99-6866-4d6c-81fe-2f97bc0c6734
I0914 19:06:16.233577 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:16.233587 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:16.233592 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:16.233597 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:16.233602 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:16 GMT
I0914 19:06:16.233823 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
I0914 19:06:16.430509 29302 request.go:629] Waited for 196.339279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:16.430573 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:16.430580 29302 round_trippers.go:469] Request Headers:
I0914 19:06:16.430590 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:16.430600 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:16.433517 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:16.433535 29302 round_trippers.go:577] Response Headers:
I0914 19:06:16.433542 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:16.433559 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:16.433565 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:16 GMT
I0914 19:06:16.433571 29302 round_trippers.go:580] Audit-Id: 1da1d693-84a7-4480-b07f-7a386588f044
I0914 19:06:16.433576 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:16.433581 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:16.433983 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:16.630679 29302 request.go:629] Waited for 196.348452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:16.630764 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:16.630769 29302 round_trippers.go:469] Request Headers:
I0914 19:06:16.630776 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:16.630783 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:16.633557 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:16.633575 29302 round_trippers.go:577] Response Headers:
I0914 19:06:16.633582 29302 round_trippers.go:580] Audit-Id: 2136e32a-148d-4e1d-825d-95e56e17f7f3
I0914 19:06:16.633589 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:16.633597 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:16.633605 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:16.633612 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:16.633629 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:16 GMT
I0914 19:06:16.634402 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
I0914 19:06:16.830072 29302 request.go:629] Waited for 195.313935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:16.830145 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:16.830152 29302 round_trippers.go:469] Request Headers:
I0914 19:06:16.830160 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:16.830168 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:16.832962 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:16.832981 29302 round_trippers.go:577] Response Headers:
I0914 19:06:16.832988 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:16.832993 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:16.832998 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:16.833006 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:16.833011 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:16 GMT
I0914 19:06:16.833016 29302 round_trippers.go:580] Audit-Id: 685468aa-007f-4cd0-908f-286f4b9b8738
I0914 19:06:16.833566 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:17.334599 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:17.334622 29302 round_trippers.go:469] Request Headers:
I0914 19:06:17.334645 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:17.334652 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:17.337790 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:17.337810 29302 round_trippers.go:577] Response Headers:
I0914 19:06:17.337817 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:17.337823 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:17.337828 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:17 GMT
I0914 19:06:17.337835 29302 round_trippers.go:580] Audit-Id: 13885e51-e7a2-41bd-a4e6-27c1810b7f5b
I0914 19:06:17.337843 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:17.337850 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:17.338071 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
I0914 19:06:17.338439 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:17.338455 29302 round_trippers.go:469] Request Headers:
I0914 19:06:17.338465 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:17.338474 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:17.340824 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:17.340837 29302 round_trippers.go:577] Response Headers:
I0914 19:06:17.340843 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:17 GMT
I0914 19:06:17.340848 29302 round_trippers.go:580] Audit-Id: e2df7950-3f43-43ac-a2ff-9ebcb6aba048
I0914 19:06:17.340854 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:17.340862 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:17.340871 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:17.340883 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:17.341277 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:17.834981 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:17.835006 29302 round_trippers.go:469] Request Headers:
I0914 19:06:17.835015 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:17.835021 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:17.837948 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:17.837973 29302 round_trippers.go:577] Response Headers:
I0914 19:06:17.837984 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:17 GMT
I0914 19:06:17.837992 29302 round_trippers.go:580] Audit-Id: bf96bd3c-445d-4267-b684-9a852b7ce0ca
I0914 19:06:17.838000 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:17.838008 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:17.838020 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:17.838027 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:17.838816 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
I0914 19:06:17.839223 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:17.839236 29302 round_trippers.go:469] Request Headers:
I0914 19:06:17.839244 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:17.839250 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:17.842020 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:17.842042 29302 round_trippers.go:577] Response Headers:
I0914 19:06:17.842052 29302 round_trippers.go:580] Audit-Id: 58f6c61f-2107-4d49-bc25-beaf577ebc0b
I0914 19:06:17.842063 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:17.842073 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:17.842084 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:17.842094 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:17.842104 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:17 GMT
I0914 19:06:17.842191 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:18.334912 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
I0914 19:06:18.334936 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.334944 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.334950 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.337727 29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0914 19:06:18.337753 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.337763 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.337772 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.337784 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.337793 29302 round_trippers.go:580] Audit-Id: 91452a7a-9433-48f7-bb48-08448530a97b
I0914 19:06:18.337804 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.337811 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.338243 29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"894","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4904 chars]
I0914 19:06:18.338636 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
I0914 19:06:18.338654 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.338664 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.338674 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.342026 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:18.342059 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.342068 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.342078 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.342085 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.342096 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.342104 29302 round_trippers.go:580] Audit-Id: a5dad678-33fe-4c2f-a5f5-c10a6380266e
I0914 19:06:18.342118 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.342444 29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
I0914 19:06:18.342720 29302 pod_ready.go:92] pod "kube-scheduler-multinode-040952" in "kube-system" namespace has status "Ready":"True"
I0914 19:06:18.342732 29302 pod_ready.go:81] duration metric: took 2.30776305s waiting for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
I0914 19:06:18.342741 29302 pod_ready.go:38] duration metric: took 8.201906021s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0914 19:06:18.342758 29302 api_server.go:52] waiting for apiserver process to appear ...
I0914 19:06:18.342802 29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0914 19:06:18.356335 29302 command_runner.go:130] > 1693
I0914 19:06:18.356824 29302 api_server.go:72] duration metric: took 11.093271286s to wait for apiserver process to appear ...
I0914 19:06:18.356842 29302 api_server.go:88] waiting for apiserver healthz status ...
I0914 19:06:18.356862 29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
I0914 19:06:18.362653 29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
ok
I0914 19:06:18.362710 29302 round_trippers.go:463] GET https://192.168.39.14:8443/version
I0914 19:06:18.362717 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.362725 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.362731 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.363650 29302 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0914 19:06:18.363667 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.363677 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.363686 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.363694 29302 round_trippers.go:580] Content-Length: 263
I0914 19:06:18.363711 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.363719 29302 round_trippers.go:580] Audit-Id: 01d336c4-24b2-4b6e-a634-c932a4f80f56
I0914 19:06:18.363728 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.363733 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.363748 29302 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.1",
"gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
"gitTreeState": "clean",
"buildDate": "2023-08-24T11:16:30Z",
"goVersion": "go1.20.7",
"compiler": "gc",
"platform": "linux/amd64"
}
I0914 19:06:18.363790 29302 api_server.go:141] control plane version: v1.28.1
I0914 19:06:18.363805 29302 api_server.go:131] duration metric: took 6.957442ms to wait for apiserver health ...
I0914 19:06:18.363814 29302 system_pods.go:43] waiting for kube-system pods to appear ...
I0914 19:06:18.363875 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
I0914 19:06:18.363883 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.363889 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.363900 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.367955 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:18.367989 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.367997 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.368005 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.368013 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.368025 29302 round_trippers.go:580] Audit-Id: 4a4def47-e1cc-4f97-a173-69327418d154
I0914 19:06:18.368035 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.368044 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.369884 29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82928 chars]
I0914 19:06:18.373265 29302 system_pods.go:59] 12 kube-system pods found
I0914 19:06:18.373287 29302 system_pods.go:61] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running
I0914 19:06:18.373292 29302 system_pods.go:61] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running
I0914 19:06:18.373296 29302 system_pods.go:61] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running
I0914 19:06:18.373299 29302 system_pods.go:61] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
I0914 19:06:18.373303 29302 system_pods.go:61] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
I0914 19:06:18.373307 29302 system_pods.go:61] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running
I0914 19:06:18.373312 29302 system_pods.go:61] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running
I0914 19:06:18.373315 29302 system_pods.go:61] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
I0914 19:06:18.373326 29302 system_pods.go:61] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
I0914 19:06:18.373335 29302 system_pods.go:61] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running
I0914 19:06:18.373339 29302 system_pods.go:61] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running
I0914 19:06:18.373342 29302 system_pods.go:61] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running
I0914 19:06:18.373347 29302 system_pods.go:74] duration metric: took 9.528517ms to wait for pod list to return data ...
I0914 19:06:18.373355 29302 default_sa.go:34] waiting for default service account to be created ...
I0914 19:06:18.430623 29302 request.go:629] Waited for 57.191118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
I0914 19:06:18.430678 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
I0914 19:06:18.430682 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.430689 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.430695 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.433750 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:18.433768 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.433775 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.433780 29302 round_trippers.go:580] Content-Length: 261
I0914 19:06:18.433785 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.433790 29302 round_trippers.go:580] Audit-Id: f58f454f-de35-4fde-b782-3e31600d0a05
I0914 19:06:18.433795 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.433803 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.433808 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.433825 29302 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"751abfd7-43aa-4bf5-a223-71659884f01c","resourceVersion":"335","creationTimestamp":"2023-09-14T19:01:53Z"}}]}
I0914 19:06:18.433967 29302 default_sa.go:45] found service account: "default"
I0914 19:06:18.433981 29302 default_sa.go:55] duration metric: took 60.621039ms for default service account to be created ...
I0914 19:06:18.433987 29302 system_pods.go:116] waiting for k8s-apps to be running ...
I0914 19:06:18.630408 29302 request.go:629] Waited for 196.359387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
I0914 19:06:18.630467 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
I0914 19:06:18.630472 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.630480 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.630486 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.635088 29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0914 19:06:18.635116 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.635126 29302 round_trippers.go:580] Audit-Id: 40dbf5e6-bdfd-4c25-924c-528834eef0a7
I0914 19:06:18.635135 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.635142 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.635150 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.635159 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.635173 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.636346 29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82928 chars]
I0914 19:06:18.639989 29302 system_pods.go:86] 12 kube-system pods found
I0914 19:06:18.640017 29302 system_pods.go:89] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running
I0914 19:06:18.640024 29302 system_pods.go:89] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running
I0914 19:06:18.640031 29302 system_pods.go:89] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running
I0914 19:06:18.640037 29302 system_pods.go:89] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
I0914 19:06:18.640043 29302 system_pods.go:89] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
I0914 19:06:18.640050 29302 system_pods.go:89] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running
I0914 19:06:18.640058 29302 system_pods.go:89] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running
I0914 19:06:18.640064 29302 system_pods.go:89] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
I0914 19:06:18.640071 29302 system_pods.go:89] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
I0914 19:06:18.640080 29302 system_pods.go:89] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running
I0914 19:06:18.640088 29302 system_pods.go:89] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running
I0914 19:06:18.640095 29302 system_pods.go:89] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running
I0914 19:06:18.640110 29302 system_pods.go:126] duration metric: took 206.118337ms to wait for k8s-apps to be running ...
I0914 19:06:18.640118 29302 system_svc.go:44] waiting for kubelet service to be running ....
I0914 19:06:18.640169 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0914 19:06:18.654395 29302 system_svc.go:56] duration metric: took 14.272365ms WaitForService to wait for kubelet.
I0914 19:06:18.654416 29302 kubeadm.go:581] duration metric: took 11.390867757s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0914 19:06:18.654443 29302 node_conditions.go:102] verifying NodePressure condition ...
I0914 19:06:18.830833 29302 request.go:629] Waited for 176.33044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
I0914 19:06:18.830908 29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
I0914 19:06:18.830915 29302 round_trippers.go:469] Request Headers:
I0914 19:06:18.830925 29302 round_trippers.go:473] Accept: application/json, */*
I0914 19:06:18.830934 29302 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0914 19:06:18.833992 29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0914 19:06:18.834011 29302 round_trippers.go:577] Response Headers:
I0914 19:06:18.834020 29302 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
I0914 19:06:18.834029 29302 round_trippers.go:580] Date: Thu, 14 Sep 2023 19:06:18 GMT
I0914 19:06:18.834038 29302 round_trippers.go:580] Audit-Id: 78eec727-aee2-400e-8c95-4146a9496a91
I0914 19:06:18.834047 29302 round_trippers.go:580] Cache-Control: no-cache, private
I0914 19:06:18.834056 29302 round_trippers.go:580] Content-Type: application/json
I0914 19:06:18.834064 29302 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
I0914 19:06:18.834284 29302 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13543 chars]
I0914 19:06:18.835016 29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0914 19:06:18.835038 29302 node_conditions.go:123] node cpu capacity is 2
I0914 19:06:18.835048 29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0914 19:06:18.835052 29302 node_conditions.go:123] node cpu capacity is 2
I0914 19:06:18.835058 29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0914 19:06:18.835067 29302 node_conditions.go:123] node cpu capacity is 2
I0914 19:06:18.835073 29302 node_conditions.go:105] duration metric: took 180.624501ms to run NodePressure ...
I0914 19:06:18.835093 29302 start.go:228] waiting for startup goroutines ...
I0914 19:06:18.835102 29302 start.go:233] waiting for cluster config update ...
I0914 19:06:18.835115 29302 start.go:242] writing updated cluster config ...
I0914 19:06:18.835683 29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 19:06:18.835796 29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
I0914 19:06:18.838910 29302 out.go:177] * Starting worker node multinode-040952-m02 in cluster multinode-040952
I0914 19:06:18.840147 29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
I0914 19:06:18.840163 29302 cache.go:57] Caching tarball of preloaded images
I0914 19:06:18.840249 29302 preload.go:174] Found /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0914 19:06:18.840261 29302 cache.go:60] Finished verifying existence of preloaded tar for v1.28.1 on docker
I0914 19:06:18.840334 29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
I0914 19:06:18.840476 29302 start.go:365] acquiring machines lock for multinode-040952-m02: {Name:mk07a05e24a79016fc0a298412b40eb87df032d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0914 19:06:18.840512 29302 start.go:369] acquired machines lock for "multinode-040952-m02" in 19.707µs
I0914 19:06:18.840566 29302 start.go:96] Skipping create...Using existing machine configuration
I0914 19:06:18.840575 29302 fix.go:54] fixHost starting: m02
I0914 19:06:18.840830 29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 19:06:18.840857 29302 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 19:06:18.855469 29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
I0914 19:06:18.855890 29302 main.go:141] libmachine: () Calling .GetVersion
I0914 19:06:18.856329 29302 main.go:141] libmachine: Using API Version 1
I0914 19:06:18.856352 29302 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 19:06:18.856677 29302 main.go:141] libmachine: () Calling .GetMachineName
I0914 19:06:18.856891 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:18.857065 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetState
I0914 19:06:18.858712 29302 fix.go:102] recreateIfNeeded on multinode-040952-m02: state=Stopped err=<nil>
I0914 19:06:18.858735 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
W0914 19:06:18.858914 29302 fix.go:128] unexpected machine state, will restart: <nil>
I0914 19:06:18.861118 29302 out.go:177] * Restarting existing kvm2 VM for "multinode-040952-m02" ...
I0914 19:06:18.862649 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .Start
I0914 19:06:18.862832 29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring networks are active...
I0914 19:06:18.863554 29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring network default is active
I0914 19:06:18.863887 29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring network mk-multinode-040952 is active
I0914 19:06:18.864247 29302 main.go:141] libmachine: (multinode-040952-m02) Getting domain xml...
I0914 19:06:18.864791 29302 main.go:141] libmachine: (multinode-040952-m02) Creating domain...
I0914 19:06:20.114677 29302 main.go:141] libmachine: (multinode-040952-m02) Waiting to get IP...
I0914 19:06:20.115697 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:20.116116 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:20.116177 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.116093 29537 retry.go:31] will retry after 292.793167ms: waiting for machine to come up
I0914 19:06:20.410624 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:20.411041 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:20.411062 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.411011 29537 retry.go:31] will retry after 329.185161ms: waiting for machine to come up
I0914 19:06:20.741486 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:20.741956 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:20.741984 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.741922 29537 retry.go:31] will retry after 372.179082ms: waiting for machine to come up
I0914 19:06:21.115108 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:21.115492 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:21.115522 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:21.115446 29537 retry.go:31] will retry after 552.546331ms: waiting for machine to come up
I0914 19:06:21.669165 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:21.669673 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:21.669702 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:21.669630 29537 retry.go:31] will retry after 641.98724ms: waiting for machine to come up
I0914 19:06:22.313770 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:22.314305 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:22.314344 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:22.314258 29537 retry.go:31] will retry after 792.672163ms: waiting for machine to come up
I0914 19:06:23.108201 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:23.108628 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:23.108656 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:23.108582 29537 retry.go:31] will retry after 820.609535ms: waiting for machine to come up
I0914 19:06:23.930887 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:23.931350 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:23.931383 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:23.931293 29537 retry.go:31] will retry after 933.919914ms: waiting for machine to come up
I0914 19:06:24.866306 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:24.866762 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:24.866796 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:24.866720 29537 retry.go:31] will retry after 1.175445783s: waiting for machine to come up
I0914 19:06:26.044181 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:26.044639 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:26.044674 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:26.044595 29537 retry.go:31] will retry after 1.659114662s: waiting for machine to come up
I0914 19:06:27.705347 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:27.705796 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:27.705832 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:27.705738 29537 retry.go:31] will retry after 2.838813162s: waiting for machine to come up
I0914 19:06:30.546592 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:30.547049 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:30.547092 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:30.547042 29537 retry.go:31] will retry after 2.43743272s: waiting for machine to come up
I0914 19:06:32.987818 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:32.988277 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
I0914 19:06:32.988300 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:32.988246 29537 retry.go:31] will retry after 4.479558003s: waiting for machine to come up
I0914 19:06:37.471961 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.472352 29302 main.go:141] libmachine: (multinode-040952-m02) Found IP for machine: 192.168.39.16
I0914 19:06:37.472379 29302 main.go:141] libmachine: (multinode-040952-m02) Reserving static IP address...
I0914 19:06:37.472392 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has current primary IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.472813 29302 main.go:141] libmachine: (multinode-040952-m02) Reserved static IP address: 192.168.39.16
I0914 19:06:37.472867 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "multinode-040952-m02", mac: "52:54:00:2e:0b:03", ip: "192.168.39.16"} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.472882 29302 main.go:141] libmachine: (multinode-040952-m02) Waiting for SSH to be available...
I0914 19:06:37.472912 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | skip adding static IP to network mk-multinode-040952 - found existing host DHCP lease matching {name: "multinode-040952-m02", mac: "52:54:00:2e:0b:03", ip: "192.168.39.16"}
I0914 19:06:37.472930 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Getting to WaitForSSH function...
I0914 19:06:37.474853 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.475216 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.475243 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.475331 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Using SSH client type: external
I0914 19:06:37.475371 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa (-rw-------)
I0914 19:06:37.475423 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0914 19:06:37.475447 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | About to run SSH command:
I0914 19:06:37.475460 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | exit 0
I0914 19:06:37.565151 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | SSH cmd err, output: <nil>:
I0914 19:06:37.565511 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetConfigRaw
I0914 19:06:37.566140 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
I0914 19:06:37.568703 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.569097 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.569132 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.569351 29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
I0914 19:06:37.569551 29302 machine.go:88] provisioning docker machine ...
I0914 19:06:37.569568 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:37.569768 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
I0914 19:06:37.569927 29302 buildroot.go:166] provisioning hostname "multinode-040952-m02"
I0914 19:06:37.569954 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
I0914 19:06:37.570118 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:37.572245 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.572611 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.572640 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.572754 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:37.572896 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:37.573067 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:37.573182 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:37.573336 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:06:37.573757 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0914 19:06:37.573780 29302 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-040952-m02 && echo "multinode-040952-m02" | sudo tee /etc/hostname
I0914 19:06:37.710270 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-040952-m02
I0914 19:06:37.710294 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:37.712933 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.713287 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.713322 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.713438 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:37.713649 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:37.713830 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:37.713965 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:37.714153 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:06:37.714540 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0914 19:06:37.714569 29302 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-040952-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-040952-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-040952-m02' | sudo tee -a /etc/hosts;
fi
fi
I0914 19:06:37.850271 29302 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0914 19:06:37.850302 29302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17217-7285/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-7285/.minikube}
I0914 19:06:37.850321 29302 buildroot.go:174] setting up certificates
I0914 19:06:37.850331 29302 provision.go:83] configureAuth start
I0914 19:06:37.850343 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
I0914 19:06:37.850630 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
I0914 19:06:37.853071 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.853477 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.853512 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.853665 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:37.855889 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.856295 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.856327 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.856394 29302 provision.go:138] copyHostCerts
I0914 19:06:37.856430 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
I0914 19:06:37.856463 29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem, removing ...
I0914 19:06:37.856473 29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
I0914 19:06:37.856544 29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem (1082 bytes)
I0914 19:06:37.856653 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
I0914 19:06:37.856672 29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem, removing ...
I0914 19:06:37.856676 29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
I0914 19:06:37.856699 29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem (1123 bytes)
I0914 19:06:37.856741 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
I0914 19:06:37.856756 29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem, removing ...
I0914 19:06:37.856762 29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
I0914 19:06:37.856781 29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem (1679 bytes)
I0914 19:06:37.856823 29302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem org=jenkins.multinode-040952-m02 san=[192.168.39.16 192.168.39.16 localhost 127.0.0.1 minikube multinode-040952-m02]
I0914 19:06:37.904344 29302 provision.go:172] copyRemoteCerts
I0914 19:06:37.904397 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0914 19:06:37.904417 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:37.906652 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.906972 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:37.907008 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:37.907156 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:37.907312 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:37.907470 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:37.907613 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
I0914 19:06:38.000649 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0914 19:06:38.000741 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0914 19:06:38.025953 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem -> /etc/docker/server.pem
I0914 19:06:38.026028 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0914 19:06:38.048996 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0914 19:06:38.049067 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0914 19:06:38.072478 29302 provision.go:86] duration metric: configureAuth took 222.133675ms
I0914 19:06:38.072507 29302 buildroot.go:189] setting minikube options for container-runtime
I0914 19:06:38.072712 29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 19:06:38.072733 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:38.072954 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:38.075633 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:38.075959 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:38.076005 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:38.076116 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:38.076304 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:38.076482 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:38.076626 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:38.076778 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:06:38.077069 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0914 19:06:38.077082 29302 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0914 19:06:38.199048 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0914 19:06:38.199074 29302 buildroot.go:70] root file system type: tmpfs
I0914 19:06:38.199195 29302 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0914 19:06:38.199220 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:38.201601 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:38.201971 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:38.201992 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:38.202160 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:38.202374 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:38.202529 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:38.202642 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:38.202785 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:06:38.203087 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0914 19:06:38.203150 29302 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.14"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0914 19:06:38.339052 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.14
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0914 19:06:38.339081 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:38.341807 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:38.342226 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:38.342261 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:38.342430 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:38.342621 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:38.342798 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:38.342954 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:38.343119 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:06:38.343432 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0914 19:06:38.343461 29302 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0914 19:06:39.223778 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0914 19:06:39.223805 29302 machine.go:91] provisioned docker machine in 1.654241082s
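The step above shows minikube's change-detection idiom for systemd units: render the unit to `docker.service.new`, `diff` it against the installed unit, and only when they differ (or the installed unit is missing, as the `can't stat` output shows here) move the new file into place and daemon-reload/enable/restart. A minimal local sketch of that idiom, using hypothetical temp paths instead of `/lib/systemd/system/docker.service` and omitting the `systemctl` calls, which need a real systemd:

```shell
# Sketch of the diff-or-swap unit install idiom from the log.
# $unit / $new are temp stand-ins, not the real docker.service paths.
unit=$(mktemp -u)   # intentionally nonexistent, like the fresh VM above
new=$(mktemp)
printf '%s\n' '[Unit]' 'Description=Demo Engine' > "$new"

# diff fails when the installed unit is missing or differs; only then
# install the new file (the real command then reloads and restarts docker).
if ! diff -u "$unit" "$new" >/dev/null 2>&1; then
    mv "$new" "$unit"
fi
```

Skipping the swap when the rendered unit is byte-identical avoids an unnecessary docker restart on every re-provision.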
I0914 19:06:39.223818 29302 start.go:300] post-start starting for "multinode-040952-m02" (driver="kvm2")
I0914 19:06:39.223828 29302 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0914 19:06:39.223843 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:39.224176 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0914 19:06:39.224211 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:39.226901 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.227247 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:39.227280 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.227544 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:39.227745 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:39.227911 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:39.228053 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
I0914 19:06:39.321534 29302 ssh_runner.go:195] Run: cat /etc/os-release
I0914 19:06:39.325932 29302 command_runner.go:130] > NAME=Buildroot
I0914 19:06:39.325948 29302 command_runner.go:130] > VERSION=2021.02.12-1-gaa3debf-dirty
I0914 19:06:39.325957 29302 command_runner.go:130] > ID=buildroot
I0914 19:06:39.325962 29302 command_runner.go:130] > VERSION_ID=2021.02.12
I0914 19:06:39.325972 29302 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0914 19:06:39.326365 29302 info.go:137] Remote host: Buildroot 2021.02.12
I0914 19:06:39.326381 29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/addons for local assets ...
I0914 19:06:39.326432 29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/files for local assets ...
I0914 19:06:39.326501 29302 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> 145062.pem in /etc/ssl/certs
I0914 19:06:39.326513 29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /etc/ssl/certs/145062.pem
I0914 19:06:39.326584 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0914 19:06:39.336967 29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /etc/ssl/certs/145062.pem (1708 bytes)
I0914 19:06:39.360557 29302 start.go:303] post-start completed in 136.725285ms
I0914 19:06:39.360581 29302 fix.go:56] fixHost completed within 20.520003113s
I0914 19:06:39.360605 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:39.362948 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.363269 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:39.363315 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.363388 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:39.363595 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:39.363783 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:39.363936 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:39.364099 29302 main.go:141] libmachine: Using SSH client type: native
I0914 19:06:39.364460 29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0914 19:06:39.364472 29302 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0914 19:06:39.486077 29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694718399.434257584
I0914 19:06:39.486101 29302 fix.go:206] guest clock: 1694718399.434257584
I0914 19:06:39.486110 29302 fix.go:219] Guest: 2023-09-14 19:06:39.434257584 +0000 UTC Remote: 2023-09-14 19:06:39.360584834 +0000 UTC m=+78.429360914 (delta=73.67275ms)
I0914 19:06:39.486128 29302 fix.go:190] guest clock delta is within tolerance: 73.67275ms
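The `date +%!s(MISSING).%!N(MISSING)` line is a Go format-verb logging artifact, not the literal command; judging by the `1694718399.434257584` output and the delta computed in `fix.go`, the guest is presumably asked for something of the form `date +%s.%N`, and the result is compared against the host clock. A hedged local sketch of that delta check:

```shell
# Hedged sketch of the guest-clock delta check. Both timestamps are taken
# locally here; in the log, $guest comes from the VM over SSH.
guest=$(date +%s.%N)
host=$(date +%s.%N)
# awk handles the fractional seconds; print the absolute delta.
delta=$(awk -v g="$guest" -v h="$host" \
  'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.9f", d }')
echo "guest clock delta: ${delta}s"
```

When the delta stays inside tolerance, as the 73.67275ms here does, provisioning proceeds without adjusting the guest clock.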
I0914 19:06:39.486135 29302 start.go:83] releasing machines lock for "multinode-040952-m02", held for 20.645613984s
I0914 19:06:39.486160 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:39.486442 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
I0914 19:06:39.488972 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.489301 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:39.489321 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.491933 29302 out.go:177] * Found network options:
I0914 19:06:39.493577 29302 out.go:177] - NO_PROXY=192.168.39.14
W0914 19:06:39.495217 29302 proxy.go:119] fail to check proxy env: Error ip not in block
I0914 19:06:39.495254 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:39.495809 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:39.495995 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
I0914 19:06:39.496072 29302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0914 19:06:39.496116 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
W0914 19:06:39.496205 29302 proxy.go:119] fail to check proxy env: Error ip not in block
I0914 19:06:39.496278 29302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0914 19:06:39.496299 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
I0914 19:06:39.498773 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.498969 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.499150 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:39.499181 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.499303 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:39.499318 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
I0914 19:06:39.499348 29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
I0914 19:06:39.499474 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:39.499542 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
I0914 19:06:39.499625 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:39.499690 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
I0914 19:06:39.499747 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
I0914 19:06:39.499829 29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
I0914 19:06:39.499990 29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
I0914 19:06:39.587315 29302 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0914 19:06:39.587941 29302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0914 19:06:39.588006 29302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0914 19:06:39.610801 29302 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0914 19:06:39.610851 29302 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0914 19:06:39.610876 29302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0914 19:06:39.610891 29302 start.go:469] detecting cgroup driver to use...
I0914 19:06:39.610989 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0914 19:06:39.629605 29302 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0914 19:06:39.630150 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0914 19:06:39.641201 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0914 19:06:39.651880 29302 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0914 19:06:39.651937 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0914 19:06:39.663251 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0914 19:06:39.674202 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0914 19:06:39.685211 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0914 19:06:39.696908 29302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0914 19:06:39.709126 29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
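The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place: pin the pause image to 3.9, disable `restrict_oom_score_adj`, and set `SystemdCgroup = false` so containerd uses the cgroupfs driver. The same edits, applied to a throwaway sample file rather than the real config:

```shell
# The sed edits the log runs against /etc/containerd/config.toml,
# demonstrated on a temp copy with representative lines.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
  sandbox_image = "registry.k8s.io/pause:3.8"
  restrict_oom_score_adj = true
  SystemdCgroup = true
EOF

# \1 preserves the original indentation captured by ( *).
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
```

Editing with anchored, indentation-preserving patterns keeps the rest of the TOML untouched, which matters because the file is owned by containerd's own packaging.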
I0914 19:06:39.721014 29302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0914 19:06:39.731728 29302 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0914 19:06:39.731788 29302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0914 19:06:39.742220 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:06:39.854266 29302 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0914 19:06:39.871417 29302 start.go:469] detecting cgroup driver to use...
I0914 19:06:39.871488 29302 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0914 19:06:39.884609 29302 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0914 19:06:39.884650 29302 command_runner.go:130] > [Unit]
I0914 19:06:39.884657 29302 command_runner.go:130] > Description=Docker Application Container Engine
I0914 19:06:39.884663 29302 command_runner.go:130] > Documentation=https://docs.docker.com
I0914 19:06:39.884669 29302 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0914 19:06:39.884677 29302 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0914 19:06:39.884682 29302 command_runner.go:130] > StartLimitBurst=3
I0914 19:06:39.884689 29302 command_runner.go:130] > StartLimitIntervalSec=60
I0914 19:06:39.884693 29302 command_runner.go:130] > [Service]
I0914 19:06:39.884698 29302 command_runner.go:130] > Type=notify
I0914 19:06:39.884702 29302 command_runner.go:130] > Restart=on-failure
I0914 19:06:39.884708 29302 command_runner.go:130] > Environment=NO_PROXY=192.168.39.14
I0914 19:06:39.884715 29302 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0914 19:06:39.884726 29302 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0914 19:06:39.884735 29302 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0914 19:06:39.884743 29302 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0914 19:06:39.884752 29302 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0914 19:06:39.884761 29302 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0914 19:06:39.884768 29302 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0914 19:06:39.884787 29302 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0914 19:06:39.884796 29302 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0914 19:06:39.884802 29302 command_runner.go:130] > ExecStart=
I0914 19:06:39.884821 29302 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0914 19:06:39.884831 29302 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0914 19:06:39.884838 29302 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0914 19:06:39.884845 29302 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0914 19:06:39.884852 29302 command_runner.go:130] > LimitNOFILE=infinity
I0914 19:06:39.884856 29302 command_runner.go:130] > LimitNPROC=infinity
I0914 19:06:39.884862 29302 command_runner.go:130] > LimitCORE=infinity
I0914 19:06:39.884867 29302 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0914 19:06:39.884875 29302 command_runner.go:130] > # Only systemd 226 and above support this version.
I0914 19:06:39.884879 29302 command_runner.go:130] > TasksMax=infinity
I0914 19:06:39.884888 29302 command_runner.go:130] > TimeoutStartSec=0
I0914 19:06:39.884894 29302 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0914 19:06:39.884898 29302 command_runner.go:130] > Delegate=yes
I0914 19:06:39.884905 29302 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0914 19:06:39.884917 29302 command_runner.go:130] > KillMode=process
I0914 19:06:39.884923 29302 command_runner.go:130] > [Install]
I0914 19:06:39.884929 29302 command_runner.go:130] > WantedBy=multi-user.target
I0914 19:06:39.885921 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0914 19:06:39.902340 29302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0914 19:06:39.919241 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0914 19:06:39.931882 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0914 19:06:39.944141 29302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0914 19:06:39.980328 29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0914 19:06:39.993054 29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0914 19:06:40.010119 29302 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0914 19:06:40.010413 29302 ssh_runner.go:195] Run: which cri-dockerd
I0914 19:06:40.014171 29302 command_runner.go:130] > /usr/bin/cri-dockerd
I0914 19:06:40.014287 29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0914 19:06:40.024688 29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0914 19:06:40.042167 29302 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0914 19:06:40.160404 29302 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0914 19:06:40.272827 29302 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0914 19:06:40.272855 29302 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0914 19:06:40.289795 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:06:40.398781 29302 ssh_runner.go:195] Run: sudo systemctl restart docker
I0914 19:06:41.803191 29302 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.40437357s)
I0914 19:06:41.803251 29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0914 19:06:41.905435 29302 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0914 19:06:42.032291 29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0914 19:06:42.160622 29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 19:06:42.277173 29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0914 19:06:42.292786 29302 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
I0914 19:06:42.294889 29302 out.go:177]
W0914 19:06:42.296193 29302 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0914 19:06:42.296210 29302 out.go:239] *
W0914 19:06:42.297001 29302 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0914 19:06:42.298210 29302 out.go:177]
*
* ==> Docker <==
* -- Journal begins at Thu 2023-09-14 19:05:32 UTC, ends at Thu 2023-09-14 19:06:43 UTC. --
Sep 14 19:06:07 multinode-040952 dockerd[833]: time="2023-09-14T19:06:07.110721289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 14 19:06:07 multinode-040952 dockerd[833]: time="2023-09-14T19:06:07.110740258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 14 19:06:07 multinode-040952 dockerd[833]: time="2023-09-14T19:06:07.110748982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.560125431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.561439001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.561948132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.562497172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.912088487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.912140403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.912165447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.912176351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 14 19:06:11 multinode-040952 cri-dockerd[1047]: time="2023-09-14T19:06:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8c5adb06ad8644fdaa00404169cd62847107a188941b235afcd96bc74a471f36/resolv.conf as [nameserver 192.168.122.1]"
Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.248847029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.248915066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.248934609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.248946671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 14 19:06:11 multinode-040952 cri-dockerd[1047]: time="2023-09-14T19:06:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9b65f9b32fcb4cf47bc4f4ec371810e2c59f9379e67003f5d435073d09f33200/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.746238437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.746301425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.746320987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.746384615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 14 19:06:34 multinode-040952 dockerd[833]: time="2023-09-14T19:06:34.567374268Z" level=info msg="shim disconnected" id=c9e2f6411addd9aa2f754f78fda3ce71ac8bf7bb5ff3f65f3c0511f08e429929 namespace=moby
Sep 14 19:06:34 multinode-040952 dockerd[833]: time="2023-09-14T19:06:34.568816508Z" level=warning msg="cleaning up after shim disconnected" id=c9e2f6411addd9aa2f754f78fda3ce71ac8bf7bb5ff3f65f3c0511f08e429929 namespace=moby
Sep 14 19:06:34 multinode-040952 dockerd[827]: time="2023-09-14T19:06:34.569676835Z" level=info msg="ignoring event" container=c9e2f6411addd9aa2f754f78fda3ce71ac8bf7bb5ff3f65f3c0511f08e429929 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 14 19:06:34 multinode-040952 dockerd[833]: time="2023-09-14T19:06:34.570344420Z" level=info msg="cleaning up dead shim" namespace=moby
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
45c401009e903 8c811b4aec35f 32 seconds ago Running busybox 1 9b65f9b32fcb4
d8bb85ef502bc ead0a4a53df89 32 seconds ago Running coredns 1 8c5adb06ad864
b3f4888d47e37 c7d1297425461 37 seconds ago Running kindnet-cni 1 ecedcc81d5040
c9e2f6411addd 6e38f40d628db 39 seconds ago Exited storage-provisioner 1 6517274d37d45
9057a95faf814 6cdbabde3874e 40 seconds ago Running kube-proxy 1 baaaa29d51d71
1c691ff0fb1dc b462ce0c8b1ff 44 seconds ago Running kube-scheduler 1 a2717cfc7b703
d2a4b9fbe6163 73deb9a3f7025 45 seconds ago Running etcd 1 8003d9c05224c
b6362a20e1ba8 5c801295c21d0 45 seconds ago Running kube-apiserver 1 d62732c77e111
7551a7f5f8d28 821b3dfea27be 45 seconds ago Running kube-controller-manager 1 d33e8c5c8b80c
b2201408c190d gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 3 minutes ago Exited busybox 0 606d676847d38
5ca168b256eca ead0a4a53df89 4 minutes ago Exited coredns 0 fb2dbcea99e9f
1dac2d18ee960 kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052 4 minutes ago Exited kindnet-cni 0 2c6b193d8f06a
bd14e8416f22e 6cdbabde3874e 4 minutes ago Exited kube-proxy 0 ac89590af9af7
e7dd2a8d2bf2a b462ce0c8b1ff 5 minutes ago Exited kube-scheduler 0 3204588282f3d
79de1cbad023f 73deb9a3f7025 5 minutes ago Exited etcd 0 992d221cf3de6
bdae306df7741 821b3dfea27be 5 minutes ago Exited kube-controller-manager 0 c60a4b7edf2a5
7ae1932584ffa 5c801295c21d0 5 minutes ago Exited kube-apiserver 0 bf69af78fefd5
*
* ==> coredns [5ca168b256ec] <==
* [INFO] 10.244.1.2:34807 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001920386s
[INFO] 10.244.1.2:58373 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000223623s
[INFO] 10.244.1.2:34744 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097963s
[INFO] 10.244.1.2:42669 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00110869s
[INFO] 10.244.1.2:49456 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084315s
[INFO] 10.244.1.2:36531 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105982s
[INFO] 10.244.1.2:44052 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073712s
[INFO] 10.244.0.3:53028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102025s
[INFO] 10.244.0.3:60397 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219163s
[INFO] 10.244.0.3:58611 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119555s
[INFO] 10.244.0.3:56794 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000389586s
[INFO] 10.244.1.2:57290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000238838s
[INFO] 10.244.1.2:38598 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112648s
[INFO] 10.244.1.2:36747 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130289s
[INFO] 10.244.1.2:44678 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130001s
[INFO] 10.244.0.3:56148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000416563s
[INFO] 10.244.0.3:48925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015457s
[INFO] 10.244.0.3:37027 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000266436s
[INFO] 10.244.0.3:58029 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000132942s
[INFO] 10.244.1.2:32850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167159s
[INFO] 10.244.1.2:52181 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075407s
[INFO] 10.244.1.2:33878 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077018s
[INFO] 10.244.1.2:33144 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000119325s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> coredns [d8bb85ef502b] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:51360 - 19367 "HINFO IN 781133024460292738.4424492601979386444. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021489339s
*
* ==> describe nodes <==
* Name: multinode-040952
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-040952
kubernetes.io/os=linux
minikube.k8s.io/commit=677eba4579c03f097a5d68f80823c59a8add4a3b
minikube.k8s.io/name=multinode-040952
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_09_14T19_01_41_0700
minikube.k8s.io/version=v1.31.2
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 14 Sep 2023 19:01:37 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-040952
AcquireTime: <unset>
RenewTime: Thu, 14 Sep 2023 19:06:33 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 14 Sep 2023 19:06:09 +0000 Thu, 14 Sep 2023 19:01:35 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 14 Sep 2023 19:06:09 +0000 Thu, 14 Sep 2023 19:01:35 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 14 Sep 2023 19:06:09 +0000 Thu, 14 Sep 2023 19:01:35 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 14 Sep 2023 19:06:09 +0000 Thu, 14 Sep 2023 19:06:09 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.14
Hostname: multinode-040952
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: a22e570b53364d97906f6fbadc119046
System UUID: a22e570b-5336-4d97-906f-6fbadc119046
Boot ID: 805cf3f0-f992-49df-b9c1-1c815bc938ec
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.6
Kubelet Version: v1.28.1
Kube-Proxy Version: v1.28.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-5bc68d56bd-8xj5t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m32s
kube-system coredns-5dd5756b68-qrv2r 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 4m50s
kube-system etcd-multinode-040952 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 5m2s
kube-system kindnet-hvz8s 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 4m50s
kube-system kube-apiserver-multinode-040952 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m2s
kube-system kube-controller-manager-multinode-040952 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m2s
kube-system kube-proxy-hbsmt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m50s
kube-system kube-scheduler-multinode-040952 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m5s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m48s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (10%) 220Mi (10%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m48s kube-proxy
Normal Starting 39s kube-proxy
Normal NodeHasSufficientMemory 5m11s (x8 over 5m11s) kubelet Node multinode-040952 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m11s (x8 over 5m11s) kubelet Node multinode-040952 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m11s (x7 over 5m11s) kubelet Node multinode-040952 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m11s kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 5m3s kubelet Node multinode-040952 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 5m3s kubelet Node multinode-040952 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 5m3s kubelet Node multinode-040952 status is now: NodeHasSufficientPID
Normal Starting 5m3s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 5m2s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 4m50s node-controller Node multinode-040952 event: Registered Node multinode-040952 in Controller
Normal NodeReady 4m38s kubelet Node multinode-040952 status is now: NodeReady
Normal Starting 47s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 47s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 46s (x8 over 47s) kubelet Node multinode-040952 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 46s (x8 over 47s) kubelet Node multinode-040952 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 46s (x7 over 47s) kubelet Node multinode-040952 status is now: NodeHasSufficientPID
Normal RegisteredNode 29s node-controller Node multinode-040952 event: Registered Node multinode-040952 in Controller
Name: multinode-040952-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-040952-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 14 Sep 2023 19:02:56 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-040952-m02
AcquireTime: <unset>
RenewTime: Thu, 14 Sep 2023 19:04:48 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 14 Sep 2023 19:03:27 +0000 Thu, 14 Sep 2023 19:02:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 14 Sep 2023 19:03:27 +0000 Thu, 14 Sep 2023 19:02:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 14 Sep 2023 19:03:27 +0000 Thu, 14 Sep 2023 19:02:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 14 Sep 2023 19:03:27 +0000 Thu, 14 Sep 2023 19:03:09 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.16
Hostname: multinode-040952-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 275cf71437384b3685d193f4ccec91cc
System UUID: 275cf714-3738-4b36-85d1-93f4ccec91cc
Boot ID: 9d1451db-6918-461e-9cc4-16724afd48c4
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.6
Kubelet Version: v1.28.1
Kube-Proxy Version: v1.28.1
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-5bc68d56bd-msf7r 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m32s
kube-system kindnet-lrkhw 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 3m47s
kube-system kube-proxy-gldkh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m47s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 3m41s kube-proxy
Normal Starting 3m47s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m47s (x2 over 3m47s) kubelet Node multinode-040952-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m47s (x2 over 3m47s) kubelet Node multinode-040952-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m47s (x2 over 3m47s) kubelet Node multinode-040952-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m47s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 3m45s node-controller Node multinode-040952-m02 event: Registered Node multinode-040952-m02 in Controller
Normal NodeReady 3m34s kubelet Node multinode-040952-m02 status is now: NodeReady
Normal RegisteredNode 29s node-controller Node multinode-040952-m02 event: Registered Node multinode-040952-m02 in Controller
Name: multinode-040952-m03
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-040952-m03
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 14 Sep 2023 19:04:41 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-040952-m03
AcquireTime: <unset>
RenewTime: Thu, 14 Sep 2023 19:04:51 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 14 Sep 2023 19:04:49 +0000 Thu, 14 Sep 2023 19:04:41 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 14 Sep 2023 19:04:49 +0000 Thu, 14 Sep 2023 19:04:41 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 14 Sep 2023 19:04:49 +0000 Thu, 14 Sep 2023 19:04:41 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 14 Sep 2023 19:04:49 +0000 Thu, 14 Sep 2023 19:04:49 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.107
Hostname: multinode-040952-m03
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: f4a36b25533f44c6ba83b2c2bb7581e2
System UUID: f4a36b25-533f-44c6-ba83-b2c2bb7581e2
Boot ID: e31d5883-b5c3-4efd-a9c9-90546837ce6d
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.6
Kubelet Version: v1.28.1
Kube-Proxy Version: v1.28.1
PodCIDR: 10.244.3.0/24
PodCIDRs: 10.244.3.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kindnet-pjfsc 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 2m53s
kube-system kube-proxy-gpl2p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m53s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m kube-proxy
Normal Starting 2m47s kube-proxy
Normal NodeHasSufficientMemory 2m53s (x5 over 2m54s) kubelet Node multinode-040952-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m53s (x5 over 2m54s) kubelet Node multinode-040952-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m53s (x5 over 2m54s) kubelet Node multinode-040952-m03 status is now: NodeHasSufficientPID
Normal NodeReady 2m37s kubelet Node multinode-040952-m03 status is now: NodeReady
Normal Starting 2m2s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m2s (x2 over 2m2s) kubelet Node multinode-040952-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m2s (x2 over 2m2s) kubelet Node multinode-040952-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m2s (x2 over 2m2s) kubelet Node multinode-040952-m03 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m2s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 114s kubelet Node multinode-040952-m03 status is now: NodeReady
Normal RegisteredNode 29s node-controller Node multinode-040952-m03 event: Registered Node multinode-040952-m03 in Controller
*
* ==> dmesg <==
* [Sep14 19:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.071026] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.320578] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.256122] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.139451] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.741731] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
[ +6.615091] systemd-fstab-generator[514]: Ignoring "noauto" for root device
[ +0.091320] systemd-fstab-generator[526]: Ignoring "noauto" for root device
[ +1.160091] systemd-fstab-generator[754]: Ignoring "noauto" for root device
[ +0.277522] systemd-fstab-generator[794]: Ignoring "noauto" for root device
[ +0.106300] systemd-fstab-generator[805]: Ignoring "noauto" for root device
[ +0.125747] systemd-fstab-generator[818]: Ignoring "noauto" for root device
[ +0.569199] systemd-fstab-generator[992]: Ignoring "noauto" for root device
[ +0.109950] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
[ +0.112895] systemd-fstab-generator[1014]: Ignoring "noauto" for root device
[ +0.113984] systemd-fstab-generator[1025]: Ignoring "noauto" for root device
[ +0.119773] systemd-fstab-generator[1039]: Ignoring "noauto" for root device
[ +11.953340] systemd-fstab-generator[1284]: Ignoring "noauto" for root device
[ +0.384554] kauditd_printk_skb: 67 callbacks suppressed
*
* ==> etcd [79de1cbad023] <==
* {"level":"info","ts":"2023-09-14T19:01:36.01867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 became leader at term 2"}
{"level":"info","ts":"2023-09-14T19:01:36.018676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 599035dfeb7e0476 elected leader 599035dfeb7e0476 at term 2"}
{"level":"info","ts":"2023-09-14T19:01:36.0202Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"599035dfeb7e0476","local-member-attributes":"{Name:multinode-040952 ClientURLs:[https://192.168.39.14:2379]}","request-path":"/0/members/599035dfeb7e0476/attributes","cluster-id":"7dcc0a60dbbc15a1","publish-timeout":"7s"}
{"level":"info","ts":"2023-09-14T19:01:36.020483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-09-14T19:01:36.020568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-09-14T19:01:36.022008Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-09-14T19:01:36.022275Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-09-14T19:01:36.022291Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-09-14T19:01:36.022636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.14:2379"}
{"level":"info","ts":"2023-09-14T19:01:36.022715Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-14T19:01:36.024658Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7dcc0a60dbbc15a1","local-member-id":"599035dfeb7e0476","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-14T19:01:36.024747Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-14T19:01:36.024765Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-14T19:03:51.807588Z","caller":"traceutil/trace.go:171","msg":"trace[23883446] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"129.206345ms","start":"2023-09-14T19:03:51.678265Z","end":"2023-09-14T19:03:51.807471Z","steps":["trace[23883446] 'process raft request' (duration: 129.086639ms)"],"step_count":1}
{"level":"info","ts":"2023-09-14T19:04:52.930829Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-09-14T19:04:52.930966Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-040952","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.14:2380"],"advertise-client-urls":["https://192.168.39.14:2379"]}
{"level":"warn","ts":"2023-09-14T19:04:52.931161Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2023-09-14T19:04:52.931257Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2023-09-14T19:04:52.932088Z","caller":"v3rpc/watch.go:473","msg":"failed to send watch response to gRPC stream","error":"rpc error: code = Unavailable desc = transport is closing"}
{"level":"warn","ts":"2023-09-14T19:04:52.952017Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.14:2379: use of closed network connection"}
{"level":"warn","ts":"2023-09-14T19:04:52.952093Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.14:2379: use of closed network connection"}
{"level":"info","ts":"2023-09-14T19:04:52.952149Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"599035dfeb7e0476","current-leader-member-id":"599035dfeb7e0476"}
{"level":"info","ts":"2023-09-14T19:04:52.955652Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.14:2380"}
{"level":"info","ts":"2023-09-14T19:04:52.955754Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.14:2380"}
{"level":"info","ts":"2023-09-14T19:04:52.955763Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-040952","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.14:2380"],"advertise-client-urls":["https://192.168.39.14:2379"]}
*
* ==> etcd [d2a4b9fbe616] <==
* {"level":"info","ts":"2023-09-14T19:05:59.734271Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-09-14T19:05:59.734297Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-09-14T19:05:59.740699Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-09-14T19:05:59.743953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 switched to configuration voters=(6453717501866804342)"}
{"level":"info","ts":"2023-09-14T19:05:59.746046Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7dcc0a60dbbc15a1","local-member-id":"599035dfeb7e0476","added-peer-id":"599035dfeb7e0476","added-peer-peer-urls":["https://192.168.39.14:2380"]}
{"level":"info","ts":"2023-09-14T19:05:59.746423Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7dcc0a60dbbc15a1","local-member-id":"599035dfeb7e0476","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-14T19:05:59.746624Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-14T19:05:59.744002Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.14:2380"}
{"level":"info","ts":"2023-09-14T19:05:59.762875Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.14:2380"}
{"level":"info","ts":"2023-09-14T19:05:59.767737Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"599035dfeb7e0476","initial-advertise-peer-urls":["https://192.168.39.14:2380"],"listen-peer-urls":["https://192.168.39.14:2380"],"advertise-client-urls":["https://192.168.39.14:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.14:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-09-14T19:05:59.767794Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-09-14T19:06:00.733425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 is starting a new election at term 2"}
{"level":"info","ts":"2023-09-14T19:06:00.733712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 became pre-candidate at term 2"}
{"level":"info","ts":"2023-09-14T19:06:00.73392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 received MsgPreVoteResp from 599035dfeb7e0476 at term 2"}
{"level":"info","ts":"2023-09-14T19:06:00.734128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 became candidate at term 3"}
{"level":"info","ts":"2023-09-14T19:06:00.73421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 received MsgVoteResp from 599035dfeb7e0476 at term 3"}
{"level":"info","ts":"2023-09-14T19:06:00.734234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 became leader at term 3"}
{"level":"info","ts":"2023-09-14T19:06:00.734355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 599035dfeb7e0476 elected leader 599035dfeb7e0476 at term 3"}
{"level":"info","ts":"2023-09-14T19:06:00.738829Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"599035dfeb7e0476","local-member-attributes":"{Name:multinode-040952 ClientURLs:[https://192.168.39.14:2379]}","request-path":"/0/members/599035dfeb7e0476/attributes","cluster-id":"7dcc0a60dbbc15a1","publish-timeout":"7s"}
{"level":"info","ts":"2023-09-14T19:06:00.739125Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-09-14T19:06:00.739447Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-09-14T19:06:00.739493Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-09-14T19:06:00.739514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-09-14T19:06:00.740785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-09-14T19:06:00.740794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.14:2379"}
*
* ==> kernel <==
* 19:06:43 up 1 min, 0 users, load average: 1.27, 0.36, 0.12
Linux multinode-040952 5.10.57 #1 SMP Tue Sep 12 02:34:33 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kindnet [1dac2d18ee96] <==
* I0914 19:04:13.417146 1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
I0914 19:04:13.417297 1 main.go:227] handling current node
I0914 19:04:13.417313 1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
I0914 19:04:13.417322 1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24]
I0914 19:04:13.417671 1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
I0914 19:04:13.417972 1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.2.0/24]
I0914 19:04:23.424504 1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
I0914 19:04:23.425037 1 main.go:227] handling current node
I0914 19:04:23.425203 1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
I0914 19:04:23.425329 1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24]
I0914 19:04:23.425757 1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
I0914 19:04:23.425805 1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.2.0/24]
I0914 19:04:33.433351 1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
I0914 19:04:33.433474 1 main.go:227] handling current node
I0914 19:04:33.433513 1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
I0914 19:04:33.434156 1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24]
I0914 19:04:33.434804 1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
I0914 19:04:33.435075 1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.2.0/24]
I0914 19:04:43.456778 1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
I0914 19:04:43.457185 1 main.go:227] handling current node
I0914 19:04:43.457215 1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
I0914 19:04:43.457226 1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24]
I0914 19:04:43.457383 1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
I0914 19:04:43.457389 1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24]
I0914 19:04:43.457441 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.107 Flags: [] Table: 0}
*
* ==> kindnet [b3f4888d47e3] <==
* I0914 19:06:08.275205 1 main.go:227] handling current node
I0914 19:06:08.275662 1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
I0914 19:06:08.275676 1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24]
I0914 19:06:08.275797 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.16 Flags: [] Table: 0}
I0914 19:06:08.275887 1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
I0914 19:06:08.275896 1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24]
I0914 19:06:08.275949 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.107 Flags: [] Table: 0}
I0914 19:06:18.290953 1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
I0914 19:06:18.290991 1 main.go:227] handling current node
I0914 19:06:18.291009 1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
I0914 19:06:18.291014 1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24]
I0914 19:06:18.291123 1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
I0914 19:06:18.291128 1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24]
I0914 19:06:28.307114 1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
I0914 19:06:28.307170 1 main.go:227] handling current node
I0914 19:06:28.307193 1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
I0914 19:06:28.307199 1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24]
I0914 19:06:28.307346 1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
I0914 19:06:28.307381 1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24]
I0914 19:06:38.313370 1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
I0914 19:06:38.313758 1 main.go:227] handling current node
I0914 19:06:38.314072 1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
I0914 19:06:38.314290 1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24]
I0914 19:06:38.314714 1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
I0914 19:06:38.314906 1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24]
*
* ==> kube-apiserver [7ae1932584ff] <==
* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0914 19:05:02.925127 1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0914 19:05:02.938224 1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0914 19:05:02.943236 1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
*
* ==> kube-apiserver [b6362a20e1ba] <==
* I0914 19:06:02.103379 1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
I0914 19:06:02.103893 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0914 19:06:02.103947 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0914 19:06:02.227119 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
I0914 19:06:02.271711 1 shared_informer.go:318] Caches are synced for node_authorizer
I0914 19:06:02.304807 1 shared_informer.go:318] Caches are synced for crd-autoregister
I0914 19:06:02.304872 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0914 19:06:02.305849 1 apf_controller.go:377] Running API Priority and Fairness config worker
I0914 19:06:02.305890 1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
I0914 19:06:02.306061 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0914 19:06:02.331297 1 shared_informer.go:318] Caches are synced for configmaps
I0914 19:06:02.331358 1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
I0914 19:06:02.335150 1 aggregator.go:166] initial CRD sync complete...
I0914 19:06:02.335193 1 autoregister_controller.go:141] Starting autoregister controller
I0914 19:06:02.335200 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0914 19:06:02.335206 1 cache.go:39] Caches are synced for autoregister controller
I0914 19:06:03.100463 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0914 19:06:03.368706 1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.14]
I0914 19:06:03.370054 1 controller.go:624] quota admission added evaluator for: endpoints
I0914 19:06:03.376360 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0914 19:06:05.169364 1 controller.go:624] quota admission added evaluator for: daemonsets.apps
I0914 19:06:05.329658 1 controller.go:624] quota admission added evaluator for: serviceaccounts
I0914 19:06:05.341332 1 controller.go:624] quota admission added evaluator for: deployments.apps
I0914 19:06:05.419400 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0914 19:06:05.426410 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
*
* ==> kube-controller-manager [7551a7f5f8d2] <==
* I0914 19:06:14.661322 1 shared_informer.go:318] Caches are synced for ReplicationController
I0914 19:06:14.661435 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0914 19:06:14.661442 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
I0914 19:06:14.664911 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
I0914 19:06:14.667650 1 shared_informer.go:318] Caches are synced for job
I0914 19:06:14.678438 1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
I0914 19:06:14.684625 1 shared_informer.go:318] Caches are synced for endpoint
I0914 19:06:14.710898 1 shared_informer.go:318] Caches are synced for attach detach
I0914 19:06:14.717414 1 shared_informer.go:318] Caches are synced for daemon sets
I0914 19:06:14.743422 1 shared_informer.go:318] Caches are synced for taint
I0914 19:06:14.743617 1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
I0914 19:06:14.744935 1 event.go:307] "Event occurred" object="multinode-040952" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-040952 event: Registered Node multinode-040952 in Controller"
I0914 19:06:14.744976 1 event.go:307] "Event occurred" object="multinode-040952-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-040952-m02 event: Registered Node multinode-040952-m02 in Controller"
I0914 19:06:14.744985 1 event.go:307] "Event occurred" object="multinode-040952-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-040952-m03 event: Registered Node multinode-040952-m03 in Controller"
I0914 19:06:14.747755 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0914 19:06:14.747973 1 shared_informer.go:318] Caches are synced for resource quota
I0914 19:06:14.748234 1 taint_manager.go:211] "Sending events to api server"
I0914 19:06:14.755944 1 shared_informer.go:318] Caches are synced for resource quota
I0914 19:06:14.758787 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-040952"
I0914 19:06:14.759112 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-040952-m02"
I0914 19:06:14.759307 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-040952-m03"
I0914 19:06:14.761326 1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
I0914 19:06:15.192335 1 shared_informer.go:318] Caches are synced for garbage collector
I0914 19:06:15.196730 1 shared_informer.go:318] Caches are synced for garbage collector
I0914 19:06:15.196764 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
*
* ==> kube-controller-manager [bdae306df774] <==
* I0914 19:03:11.800269 1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
I0914 19:03:11.822032 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-msf7r"
I0914 19:03:11.832933 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-8xj5t"
I0914 19:03:11.858800 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="61.587243ms"
I0914 19:03:11.881601 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.674253ms"
I0914 19:03:11.911272 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.257061ms"
I0914 19:03:11.911865 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="129.703µs"
I0914 19:03:13.323606 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-msf7r" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-msf7r"
I0914 19:03:14.759110 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.215323ms"
I0914 19:03:14.759979 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.128µs"
I0914 19:03:15.674480 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.700191ms"
I0914 19:03:15.674657 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.358µs"
I0914 19:03:50.546206 1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
I0914 19:03:50.547815 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-040952-m03\" does not exist"
I0914 19:03:50.566383 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gpl2p"
I0914 19:03:50.573363 1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pjfsc"
I0914 19:03:50.579177 1 range_allocator.go:380] "Set node PodCIDR" node="multinode-040952-m03" podCIDRs=["10.244.2.0/24"]
I0914 19:03:53.329628 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-040952-m03"
I0914 19:03:53.330341 1 event.go:307] "Event occurred" object="multinode-040952-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-040952-m03 event: Registered Node multinode-040952-m03 in Controller"
I0914 19:04:06.424965 1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
I0914 19:04:40.617462 1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
I0914 19:04:41.474271 1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
I0914 19:04:41.476212 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-040952-m03\" does not exist"
I0914 19:04:41.488035 1 range_allocator.go:380] "Set node PodCIDR" node="multinode-040952-m03" podCIDRs=["10.244.3.0/24"]
I0914 19:04:49.789872 1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
*
* ==> kube-proxy [9057a95faf81] <==
* I0914 19:06:04.144375 1 server_others.go:69] "Using iptables proxy"
I0914 19:06:04.170724 1 node.go:141] Successfully retrieved node IP: 192.168.39.14
I0914 19:06:04.450059 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
I0914 19:06:04.450082 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0914 19:06:04.458361 1 server_others.go:152] "Using iptables Proxier"
I0914 19:06:04.459621 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0914 19:06:04.460661 1 server.go:846] "Version info" version="v1.28.1"
I0914 19:06:04.461096 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0914 19:06:04.466061 1 config.go:188] "Starting service config controller"
I0914 19:06:04.466932 1 shared_informer.go:311] Waiting for caches to sync for service config
I0914 19:06:04.467389 1 config.go:97] "Starting endpoint slice config controller"
I0914 19:06:04.467710 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0914 19:06:04.469390 1 config.go:315] "Starting node config controller"
I0914 19:06:04.469898 1 shared_informer.go:311] Waiting for caches to sync for node config
I0914 19:06:04.568257 1 shared_informer.go:318] Caches are synced for service config
I0914 19:06:04.568320 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0914 19:06:04.571747 1 shared_informer.go:318] Caches are synced for node config
*
* ==> kube-proxy [bd14e8416f22] <==
* I0914 19:01:54.607139 1 server_others.go:69] "Using iptables proxy"
I0914 19:01:54.619412 1 node.go:141] Successfully retrieved node IP: 192.168.39.14
I0914 19:01:54.687340 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
I0914 19:01:54.687387 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0914 19:01:54.690390 1 server_others.go:152] "Using iptables Proxier"
I0914 19:01:54.690676 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0914 19:01:54.690863 1 server.go:846] "Version info" version="v1.28.1"
I0914 19:01:54.690874 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0914 19:01:54.691425 1 config.go:188] "Starting service config controller"
I0914 19:01:54.691480 1 shared_informer.go:311] Waiting for caches to sync for service config
I0914 19:01:54.691505 1 config.go:97] "Starting endpoint slice config controller"
I0914 19:01:54.691634 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0914 19:01:54.693270 1 config.go:315] "Starting node config controller"
I0914 19:01:54.693313 1 shared_informer.go:311] Waiting for caches to sync for node config
I0914 19:01:54.792627 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0914 19:01:54.792662 1 shared_informer.go:318] Caches are synced for service config
I0914 19:01:54.793421 1 shared_informer.go:318] Caches are synced for node config
*
* ==> kube-scheduler [1c691ff0fb1d] <==
* I0914 19:06:00.284533 1 serving.go:348] Generated self-signed cert in-memory
W0914 19:06:02.177631 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0914 19:06:02.177821 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0914 19:06:02.178051 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0914 19:06:02.178277 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0914 19:06:02.270392 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
I0914 19:06:02.270853 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0914 19:06:02.286074 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0914 19:06:02.290157 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0914 19:06:02.290663 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0914 19:06:02.290679 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0914 19:06:02.393949 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [e7dd2a8d2bf2] <==
* E0914 19:01:37.477320 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0914 19:01:37.477458 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0914 19:01:37.477507 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0914 19:01:38.288201 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0914 19:01:38.288230 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0914 19:01:38.315971 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0914 19:01:38.315998 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0914 19:01:38.401116 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0914 19:01:38.401259 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0914 19:01:38.486649 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0914 19:01:38.486726 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0914 19:01:38.559583 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0914 19:01:38.559638 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0914 19:01:38.654661 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0914 19:01:38.654763 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0914 19:01:38.746863 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0914 19:01:38.747118 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0914 19:01:38.748736 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0914 19:01:38.749082 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0914 19:01:38.759272 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0914 19:01:38.759300 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0914 19:01:40.363415 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0914 19:04:52.977252 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0914 19:04:52.977363 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0914 19:04:52.977770 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Journal begins at Thu 2023-09-14 19:05:32 UTC, ends at Thu 2023-09-14 19:06:44 UTC. --
Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.334153 1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.334219 1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume podName:f9293d00-1000-4ffa-b978-d08c00eee7e7 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:04.334203478 +0000 UTC m=+7.832049981 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume") pod "coredns-5dd5756b68-qrv2r" (UID: "f9293d00-1000-4ffa-b978-d08c00eee7e7") : object "kube-system"/"coredns" not registered
Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.435647 1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.435679 1290 projected.go:198] Error preparing data for projected volume kube-api-access-x7fmj for pod default/busybox-5bc68d56bd-8xj5t: object "default"/"kube-root-ca.crt" not registered
Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.435727 1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj podName:a8ee02a0-c9ae-454d-902d-c10e99f35812 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:04.435713596 +0000 UTC m=+7.933560098 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-x7fmj" (UniqueName: "kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj") pod "busybox-5bc68d56bd-8xj5t" (UID: "a8ee02a0-c9ae-454d-902d-c10e99f35812") : object "default"/"kube-root-ca.crt" not registered
Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.343855 1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.343919 1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume podName:f9293d00-1000-4ffa-b978-d08c00eee7e7 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:06.343905485 +0000 UTC m=+9.841751999 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume") pod "coredns-5dd5756b68-qrv2r" (UID: "f9293d00-1000-4ffa-b978-d08c00eee7e7") : object "kube-system"/"coredns" not registered
Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.444793 1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.444924 1290 projected.go:198] Error preparing data for projected volume kube-api-access-x7fmj for pod default/busybox-5bc68d56bd-8xj5t: object "default"/"kube-root-ca.crt" not registered
Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.445066 1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj podName:a8ee02a0-c9ae-454d-902d-c10e99f35812 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:06.445023628 +0000 UTC m=+9.942870143 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x7fmj" (UniqueName: "kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj") pod "busybox-5bc68d56bd-8xj5t" (UID: "a8ee02a0-c9ae-454d-902d-c10e99f35812") : object "default"/"kube-root-ca.crt" not registered
Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.836832 1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-8xj5t" podUID="a8ee02a0-c9ae-454d-902d-c10e99f35812"
Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.836934 1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-qrv2r" podUID="f9293d00-1000-4ffa-b978-d08c00eee7e7"
Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.360509 1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.360711 1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume podName:f9293d00-1000-4ffa-b978-d08c00eee7e7 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:10.360695397 +0000 UTC m=+13.858541911 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume") pod "coredns-5dd5756b68-qrv2r" (UID: "f9293d00-1000-4ffa-b978-d08c00eee7e7") : object "kube-system"/"coredns" not registered
Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.461710 1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.461760 1290 projected.go:198] Error preparing data for projected volume kube-api-access-x7fmj for pod default/busybox-5bc68d56bd-8xj5t: object "default"/"kube-root-ca.crt" not registered
Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.461858 1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj podName:a8ee02a0-c9ae-454d-902d-c10e99f35812 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:10.461842696 +0000 UTC m=+13.959689202 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x7fmj" (UniqueName: "kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj") pod "busybox-5bc68d56bd-8xj5t" (UID: "a8ee02a0-c9ae-454d-902d-c10e99f35812") : object "default"/"kube-root-ca.crt" not registered
Sep 14 19:06:06 multinode-040952 kubelet[1290]: I0914 19:06:06.956674 1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecedcc81d5040d88abcafe724d7ff2140b999b458d0e93f11b00ad6783066a7b"
Sep 14 19:06:08 multinode-040952 kubelet[1290]: E0914 19:06:08.069490 1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-8xj5t" podUID="a8ee02a0-c9ae-454d-902d-c10e99f35812"
Sep 14 19:06:08 multinode-040952 kubelet[1290]: E0914 19:06:08.077183 1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-qrv2r" podUID="f9293d00-1000-4ffa-b978-d08c00eee7e7"
Sep 14 19:06:09 multinode-040952 kubelet[1290]: I0914 19:06:09.602526 1290 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Sep 14 19:06:11 multinode-040952 kubelet[1290]: I0914 19:06:11.624814 1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b65f9b32fcb4cf47bc4f4ec371810e2c59f9379e67003f5d435073d09f33200"
Sep 14 19:06:34 multinode-040952 kubelet[1290]: I0914 19:06:34.964746 1290 scope.go:117] "RemoveContainer" containerID="bda018c9a602e0ece971914d9996bb4c59847a4417bdfa7d7cfee531dbe1b929"
Sep 14 19:06:34 multinode-040952 kubelet[1290]: I0914 19:06:34.965104 1290 scope.go:117] "RemoveContainer" containerID="c9e2f6411addd9aa2f754f78fda3ce71ac8bf7bb5ff3f65f3c0511f08e429929"
Sep 14 19:06:34 multinode-040952 kubelet[1290]: E0914 19:06:34.965323 1290 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8f25fe5b-237f-415a-baca-e4342106bb4d)\"" pod="kube-system/storage-provisioner" podUID="8f25fe5b-237f-415a-baca-e4342106bb4d"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-040952 -n multinode-040952
helpers_test.go:261: (dbg) Run: kubectl --context multinode-040952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (112.17s)